Analyze index validate structure - the dark side
by Jared Still on May 13, 2016
Recently a co-worker wanted to discuss a problem he had encountered after upgrading a database. The upgrade plan included steps to verify object integrity, which was being done with analyze table <tablename> validate structure cascade. All was fine until one particular table was being analyzed; suddenly it seemed the process had entered a hung state. The job was killed and separate commands were created to analyze each object individually. That went well up until one of the last indexes was reached.
Me: How long has it been running?
Coworker: Three days.

Yes, you read that correctly: it had been running for three days. My friend ran a 10046 trace to see what the process was doing; nearly all the work was 'db file sequential read' on the table. At that point I suspected the problem was related to the clustering_factor of the index in question. The analyze process for an index verifies each row in the index. If the table is well ordered relative to the index, the number of blocks read from the table will be similar to the number of blocks making up the table. If, however, the table is not well ordered relative to the columns in the index, the number of blocks read from the table can be many times the total number of blocks actually in the table.

Consider for a moment that we have rows with IDs of 1, 2, 3, 4 and 5, and that our index is created on the ID column. If these rows are stored in order in the table, it is very likely they will all be in the same block, and a single block read will fetch all of them. If, however, the rows are stored in some random order, a separate block read may be required for each lookup:

ID      | Block Number
------- | ------------
1       | 22
2       | 75
3       | 16
4       | 2
5       | 104
In this case 5 separate blocks must be read to retrieve these rows. In the course of walking the index, some minutes later these rows must also be read:

ID      | Block Number
------- | ------------
1048576 | 22
1048577 | 75
1048578 | 16
1048579 | 2
1048580 | 104
The blocks where these rows reside are the same blocks as in the earlier example. The problem, of course, is that the blocks have quite likely been aged out of the cache by this time and must be read again from disk. Now imagine performing this for millions of rows: with a poor clustering factor, the analyze command on an index can take quite some time to complete.

This seemed worthy of a test, so we could get a better idea of just how bad this issue might be. The test was run with 1E7 rows. The SQL shown below creates 1E7 rows, but you can simply change the value of level_2 to 1e3 to reduce the total rows to 1E6, or even smaller if you like.
[sourcecode language="sql" padlinenumbers="true"] -- keep this table small and the rows easily identifiable -- or not... -- 1e3 x 1e4 = 1e7 def level_1=1e3 def level_2=1e4 drop table validate_me purge; create table validate_me tablespace alloctest_a -- EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO --alloctest_m -- EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT MANUAL --alloctest_u -- EXTENT MANAGEMENT LOCAL UNIFORM SIZE 65536 SEGMENT SPACE MANAGEMENT AUTO pctfree 0 as select -- for a good clustering factor --id -- -- for a bad clustering factor floor(dbms_random.value(1,1e6)) id , substr('ABCDEFGHIJKLMNOPQRSTUVWXYZ',mod(id,10),15) search_data , to_char(id,'99') || '-' || rpad('x',100,'x') padded_data from ( select rownum id from ( select null from dual connect by level <= &level_1 ) a, ( select null from dual connect by level <= &level_2 ) b ) / create index validate_me_idx1 on validate_me(id,search_data); exec dbms_stats.gather_table_stats(user,'VALIDATE_ME',method_opt => 'for all columns size 1') [/sourcecode]
Let's see just what the clustering factor is for this index. The following script, cluster-factor.sql, will get this information for us.
[sourcecode language="sql"] col v_tablename new_value v_tablename noprint col v_owner new_value v_owner noprint col table_name format a20 head 'TABLE NAME' col index_name format a20 head 'INDEX NAME' col index_rows format 9,999,999,999 head 'INDEX ROWS' col table_rows format 9,999,999,999 head 'TABLE ROWS' col clustering_factor format 9,999,999,999 head 'CLUSTERING|FACTOR' col leaf_blocks format 99,999,999 head 'LEAF|BLOCKS' col table_blocks format 99,999,999 head 'TABLE|BLOCKS' prompt prompt Owner: prompt set term off feed off verify off select upper('&1') v_owner from dual; set term on feed on prompt prompt Table: prompt set term off feed off verify off select upper('&2') v_tablename from dual; set term on feed on select t.table_name , t.num_rows table_rows , t.blocks table_blocks , i.index_name , t.num_rows index_rows , i.leaf_blocks , clustering_factor from all_tables t join all_indexes i on i.table_owner = t.owner and i.table_name = t.table_name where t.owner = '&v_owner' and t.table_name = '&v_tablename' / undef 1 2 [/sourcecode]
Output from the script:
[sourcecode language="sql"] SQL> @cluster-factor jkstill validate_me Owner: Table: TABLE LEAF CLUSTERING TABLE NAME TABLE ROWS BLOCKS INDEX NAME INDEX ROWS BLOCKS FACTOR -------------------- -------------- ----------- -------------------- -------------- ----------- -------------- VALIDATE_ME 10,000,000 164,587 VALIDATE_ME_IDX1 10,000,000 45,346 10,160,089 1 row selected. Elapsed: 00:00:00.05 [/sourcecode]
On my test system creating the table with 1E7 rows required about 2 minutes and 15 seconds, while creating the index took 28 seconds. Notice that the clustering factor of 10,160,089 is close to the number of rows in the table and roughly 60 times the number of table blocks. You may be surprised at just how long it takes to analyze that index.
[sourcecode language="sql"] SQL> analyze index jkstill.validate_me_idx1 validate structure online; Index analyzed. Elapsed: 00:46:06.49 [/sourcecode]
Prior to executing this command a 10046 trace had been enabled, so there is a record of how Oracle spent its time on this command. If you are wondering how much of the 46 minutes was consumed by the tracing and writing the trace file, it was about 6 minutes; summing the 'db file sequential read' wait time from the trace accounts for nearly all of the rest:
[sourcecode language="bash"] $> grep "WAIT #48004569509552: nam='db file sequential read'"; oravm1_ora_2377_VALIDATE.trc | awk '{ x=x+$8 } END { printf ("%3.2f\n",x/1000000/60) }' 40.74 [/sourcecode]
A Well-Ordered Table
Now let's see how analyze index validate structure performs when the table is well ordered. The table uses the same DDL as in the previous example, but rather than using dbms_random to generate the ID column, the table is created with the rows loaded in ID order. This is done by uncommenting id in the DDL and commenting out the call to dbms_random, as shown below.
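For reference, only the select list of the DDL changes; everything else in the script stays the same:

[sourcecode language="sql"]
select
   -- for a good clustering factor
   id
   --
   -- for a bad clustering factor
   --floor(dbms_random.value(1,1e6)) id
   , substr('ABCDEFGHIJKLMNOPQRSTUVWXYZ',mod(id,10),15) search_data
   , to_char(id,'99') || '-' || rpad('x',100,'x') padded_data
   -- the row generator subquery is unchanged
[/sourcecode]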
[sourcecode language="sql"] SQL> analyze index jkstill.validate_me_idx1 validate structure online; Index analyzed. Elapsed: 00:01:40.53 [/sourcecode]
That was a lot faster than before: 1 minute and 40 seconds, whereas previously the same command ran for 46 minutes. Using some simple command line tools we can see how many times each block was visited. First find the cursor, and check whether the cursor number was used more than once in the session.
[sourcecode language="bash"] $> grep -B1 '^analyze index' oravm1_ora_19987_VALIDATE.trc PARSING IN CURSOR #47305432305952 len=64 dep=0 uid=90 oct=63 lid=90 tim=1462922977143796 hv=2128321230 ad='b69cfe10' sqlid='318avy9zdr6qf' analyze index jkstill.validate_me_idx1 validate structure online $> grep -nA1 'PARSING IN CURSOR #47305432305952' oravm1_ora_19987_VALIDATE.trc 63:PARSING IN CURSOR #47305432305952 len=64 dep=0 uid=90 oct=63 lid=90 tim=1462922977143796 hv=2128321230 ad='b69cfe10' sqlid='318avy9zdr6qf' 64-analyze index jkstill.validate_me_idx1 validate structure online -- 276105:PARSING IN CURSOR #47305432305952 len=55 dep=0 uid=90 oct=42 lid=90 tim=1462923077576482 hv=2217940283 ad='0' sqlid='06nvwn223659v' 276106-alter session set events '10046 trace name context off' [/sourcecode]
As the cursor number was reused later in the session, we need to limit the lines we consider from the trace file. A wait line from the trace appears like this:

WAIT #47305432305952: nam='db file sequential read' ela= 317 file#=8 block#=632358 blocks=1 obj#=335456 tim=1462923043050233

As it is already known that the entire table resides in one file, it is not necessary to check the file#. From the following command it is clear that no block was read more than once during the analyze index validate structure when the table was well ordered in relation to the index.
[sourcecode language="bash"] $> tail -n +64 oravm1_ora_19987_VALIDATE.trc| head -n +$((276105-64)) | grep "WAIT #47305432305952: nam='db file sequential read'" | awk '{ print $10 }' | awk -F= '{ print $2 }' | sort | uniq -c | sort -n | tail 1 742993 1 742994 1 742995 1 742996 1 742997 1 742998 1 742999 1 743000 1 743001 1 743002 [/sourcecode]
That command line may look a little daunting, but it is really not difficult when each bit is considered separately.

From the grep command that searched for cursors we know that the cursor we are interested in first appeared at line 64 of the trace file:
tail -n +64 oravm1_ora_19987_VALIDATE.trc

The cursor number was reused at line 276105, so output only the lines before that point in the file:
head -n +$((276105-64))

The interesting information in this case is the 'db file sequential read' waits on the cursor of interest:
grep "WAIT #47305432305952: nam='db file sequential read'"

Next awk is used to output the block#=N portion of each line:
awk '{ print $10 }'

awk is used again, but this time to split the block#=N output at the '=' sign and output only the block number:
awk -F= '{ print $2 }'

The cut command could have been used here as well, e.g. cut -d= -f2.

Sort the block numbers:
sort

Use the uniq command to get a count of how many times each value appears in the output:
uniq -c

Use sort -n to sort the output from uniq numerically; if there are any counts greater than 1, they will appear at the end of the output:
sort -n

And pipe the output through tail; we only care whether any block was read more than once:
tail
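As a side note, the whole pipeline could be collapsed into a single awk pass; this is a sketch, not from the original post, using the same file name and line numbers as above:

[sourcecode language="bash"]
# Hypothetical one-pass equivalent of the pipeline above:
# count reads per block for the cursor of interest and
# print any block that was read more than once.
awk -F'block#=| blocks=' '
   NR >= 64 && NR < 276105 && /WAIT #47305432305952: nam=.db file sequential read./ {
      cnt[$2+0]++
   }
   END { for (b in cnt) if (cnt[b] > 1) print cnt[b], b }
' oravm1_ora_19987_VALIDATE.trc
[/sourcecode]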
Now for the same procedure on the trace file generated from the poorly ordered table.
[sourcecode language="bash"] [/sourcecode]
[sourcecode language="bash"] $> grep -B1 '^analyze index' oravm1_ora_2377_VALIDATE.trc PARSING IN CURSOR #48004569509552 len=64 dep=0 uid=90 oct=63 lid=90 tim=1462547433220254 hv=2128321230 ad='aad620f0' sqlid='318avy9zdr6qf' analyze index jkstill.validate_me_idx1 validate structure online $> grep -nA1 'PARSING IN CURSOR #48004569509552' oravm1_ora_2377_VALIDATE.trc 51:PARSING IN CURSOR #48004569509552 len=64 dep=0 uid=90 oct=63 lid=90 tim=1462547433220254 hv=2128321230 ad='aad620f0' sqlid='318avy9zdr6qf' 52-analyze index jkstill.validate_me_idx1 validate structure online -- 6076836:PARSING IN CURSOR #48004569509552 len=55 dep=0 uid=90 oct=42 lid=90 tim=1462550199668869 hv=2217940283 ad='0' sqlid='06nvwn223659v' 6076837-alter session set events '10046 trace name context off' [/sourcecode]
The top 30 most active blocks were each read 53 or more times when the table was not well ordered in relation to the index.
[sourcecode language="bash"] $> tail -n +51 oravm1_ora_2377_VALIDATE.trc | head -n +$((6076836-51)) | grep "WAIT #48004569509552: nam='db file sequential read'" | awk '{ print $10 }' | awk -F= '{ print $2 }' | sort | uniq -c | sort -n | tail -30 53 599927 53 612399 53 613340 53 633506 53 640409 53 644099 53 649054 53 659198 53 659620 53 662600 53 669176 53 678119 53 682177 53 683409 54 533294 54 533624 54 537977 54 549041 54 550178 54 563206 54 568045 54 590132 54 594809 54 635330 55 523616 55 530064 55 532693 55 626066 55 638284 55 680250 [/sourcecode]
Use RMAN
There is a feature of RMAN that allows checking for logical and physical corruption of an Oracle database: the command backup check logical validate database. This command does not actually create a backup; it just reads the database looking for corrupt blocks. Following is an (edited) execution of this command on the same database where the analyze index commands were run, with a portion of the block corruption report included.
[sourcecode language="sql" padlinenumbers="true"] RMAN> backup check logical validate database; 2> Starting backup at 06-MAY-16 allocated channel: ORA_DISK_1 channel ORA_DISK_1: SID=29 instance=oravm1 device type=DISK channel ORA_DISK_1: starting full datafile backup set channel ORA_DISK_1: specifying datafile(s) in backup set input datafile file number=00008 name=+DATA/oravm/datafile/alloctest_a.273.789580415 input datafile file number=00009 name=+DATA/oravm/datafile/alloctest_u.272.789582305 input datafile file number=00024 name=+DATA/oravm/datafile/swingbench.375.821472595 input datafile file number=00023 name=+DATA/oravm/datafile/swingbench.374.821472577 input datafile file number=00019 name=+DATA/oravm/datafile/bh08.281.778786819 input datafile file number=00002 name=+DATA/oravm/datafile/sysaux.257.770316147 input datafile file number=00004 name=+DATA/oravm/datafile/users.259.770316149 input datafile file number=00001 name=+DATA/oravm/datafile/system.256.770316143 input datafile file number=00011 name=+DATA/oravm/datafile/alloctest_m.270.801310167 input datafile file number=00021 name=+DATA/oravm/datafile/ggs_data.317.820313833 input datafile file number=00006 name=+DATA/oravm/datafile/undotbs2.265.770316553 input datafile file number=00026 name=+DATA/oravm/datafile/undotbs1a.667.850134899 input datafile file number=00005 name=+DATA/oravm/datafile/example.264.770316313 input datafile file number=00014 name=+DATA/oravm/datafile/bh03.276.778786795 input datafile file number=00003 name=+DATA/oravm/datafile/rcat.258.861110361 input datafile file number=00012 name=+DATA/oravm/datafile/bh01.274.778786785 input datafile file number=00013 name=+DATA/oravm/datafile/bh02.275.778786791 input datafile file number=00022 name=+DATA/oravm/datafile/ccdata.379.821460707 input datafile file number=00007 name=+DATA/oravm/datafile/hdrtest.269.771846069 input datafile file number=00010 name=+DATA/oravm/datafile/users.271.790861829 input datafile file number=00015 name=+DATA/oravm/datafile/bh04.277.778786801 input datafile file number=00016 name=+DATA/oravm/datafile/bh05.278.778786805 input datafile file number=00017 name=+DATA/oravm/datafile/bh06.279.778786809 input datafile file number=00018 name=+DATA/oravm/datafile/bh07.280.778786815 input datafile file number=00020 name=+DATA/oravm/datafile/bh_legacy.282.778787059 input datafile file number=00025 name=+DATA/oravm/datafile/baseline_dat.681.821717827 input datafile file number=00027 name=+DATA/oravm/datafile/sqlt.668.867171675 input datafile file number=00028 name=+DATA/oravm/datafile/bh05.670.878914399 channel ORA_DISK_1: backup set complete, elapsed time: 00:25:27 List of Datafiles ================= File Status Marked Corrupt Empty Blocks Blocks Examined High SCN ---- ------ -------------- ------------ --------------- ---------- 1 OK 0 75632 256074 375655477 File Name: +DATA/oravm/datafile/system.256.770316143 Block Type Blocks Failing Blocks Processed ---------- -------------- ---------------- Data 0 158478 Index 0 17160 Other 0 4730 File Status Marked Corrupt Empty Blocks Blocks Examined High SCN ---- ------ -------------- ------------ --------------- ---------- 2 OK 0 36332 394240 375655476 File Name: +DATA/oravm/datafile/sysaux.257.770316147 Block Type Blocks Failing Blocks Processed ---------- -------------- ---------------- Data 0 170007 Index 0 138603 Other 0 49298 [/sourcecode]
As shown in the report, only about 25 minutes were required to check the entire database for physically or logically corrupt blocks, as opposed to the 46 minutes needed to analyze index validate structure on a single index. While the RMAN corruption check is not the same as the check performed by analyze index validate structure, it is a test that can be completed in a much more timely manner, particularly if some indexes are both large and have a high clustering factor.
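If only the datafile holding a suspect segment needs checking, the scope of the RMAN check can be narrowed; this is a sketch, assuming the test tablespace lives in datafile 8 (the file# seen in the trace above), with the keyword order mirroring the command above:

[sourcecode language="sql"]
-- Narrow the check to a single datafile rather than the whole database.
RMAN> backup check logical validate datafile 8;

-- Any corrupt blocks found are recorded in the dictionary:
SQL> select file#, block#, blocks, corruption_type
     from v$database_block_corruption;
[/sourcecode]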
Rebuild the Index?
If you have strong suspicions that a large index with an unfavorable clustering factor has corrupt blocks, it may be more expedient to just rebuild the index. If the database is on Oracle Enterprise Edition, the rebuild can also be done with the ONLINE option. Consider again the index on the test table with 1E7 rows: creating the index required 28 seconds, while validating its structure required 46 minutes.
[sourcecode language="bash"] SQL> alter index validate_me_idx1 rebuild online; Index altered. Elapsed: 00:00:59.88 [/sourcecode]
The conclusion is quite clear: the use of analyze index validate structure needs to be carefully considered when it is contemplated for large indexes. The command can be very resource intensive and take quite some time to complete, so it is worthwhile to consider alternatives that may be much less resource intensive and time consuming.