Feed aggregator

Oracle Integration Cloud Services (OIC) 1Z0-1042 Training: Step By Step Activity Guides/Hands-On Lab Exercise

Online Apps DBA - Sun, 2020-02-09 05:43

Find out how to become an Oracle Certified Integration Expert [1Z0-1042] by performing the Hands-On Lab Exercises at https://k21academy.com/oic05 & get an in-depth, step-wise study covering: • The Hands-On Activity Guides that you must perform in order to learn Oracle Integration Cloud Service (OIC) • How to prepare you and your team for integration with on-premise & […]

The post Oracle Integration Cloud Services (OIC) 1Z0-1042 Training: Step By Step Activity Guides/Hands-On Lab Exercise appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[1Z0-997] Oracle Cloud Infrastructure (OCI) Architect Professional Certification: Step by Step Hands-On Lab To Clear Exam

Online Apps DBA - Sun, 2020-02-09 04:10

1Z0-997 is the next-level (Professional) Architect certification after the Associate one (1Z0-1072). Check the Hands-On Activity Guides/Labs you must perform to learn, implement and clear the Oracle OCI Professional exam [1Z0-997]. Check out K21 Academy’s blog post at https://k21academy.com/1z099705 to explore our Hands-On Labs in depth. The blog post comes with: • The Learning Path for Oracle OCI Architect […]

The post [1Z0-997] Oracle Cloud Infrastructure (OCI) Architect Professional Certification: Step by Step Hands-On Lab To Clear Exam appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[AZ-103] Microsoft Azure Administrator Training: Step By Step Activity Guides/Hands-On Lab Exercise

Online Apps DBA - Sun, 2020-02-09 00:04

To clear any certification, it is important to perform the right Hands-On Labs. Check out K21 Academy’s blog post at https://k21academy.com/az10305, which tells you about the Hands-On Labs you must perform to clear the [AZ-103] exam with a high score & less trouble. Begin your journey towards becoming an [AZ-103] Microsoft Azure Administrator and earn a lot […]

The post [AZ-103] Microsoft Azure Administrator Training: Step By Step Activity Guides/Hands-On Lab Exercise appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Running SQL Server on the Oracle Free tier

Yann Neuhaus - Sat, 2020-02-08 16:25
By Franck Pachot

The Oracle Cloud is not only for Oracle Database. You can create a VM running Oracle Linux with full root access, even in the free tier: a free VM that will always be up, never expires, and has full ssh connectivity to a sudoer user, where you are able to tunnel any port. Of course, there are some limits that I’ve detailed in a previous post. But that is sufficient to run a database, given that you configure a low memory usage. For Oracle Database XE, Kamil Stawiarski mentions that you can just hack the memory test in the RPM shell script.
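As a rough illustration (the RPM file name here is an assumption, and the exact scriptlet to patch differs by version), you can list the package scripts to locate that check before editing it:

# inspect the RPM scriptlets for the hardcoded memory check
rpm -qp --scripts oracle-database-xe-18c-1.0-1.x86_64.rpm | grep -in mem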
But for Microsoft SQL Server, that’s a bit more complex, because this test is hardcoded in the sqlservr binary; the solution I propose here is to intercept the sysinfo() system call.

Creating a VM in the Oracle Cloud is very easy, here are the steps in one picture:

I’m connecting to the public IP address with ssh (the public key is uploaded when creating the VM) and I will run everything as root:

ssh opc@129.213.138.34
sudo su -
cat /etc/oracle-release

I install the docker engine (version 19.3 here):
yum install -y docker-engine

I start docker

systemctl start docker
docker info


I’ll use the latest SQL Server 2019 image built on RHEL
docker pull mcr.microsoft.com/mssql/rhel/server:2019-latest
docker images

5 minutes to download a 1.5GB image. Now trying to start it.
The nice thing (when I compare to Oracle) is that we don’t have to manually accept the license terms with a click-through process. I just mention that I have read and accepted them with: ACCEPT_EULA=Y 

I try to run it:
docker run \
-e "ACCEPT_EULA=Y" \
-e 'MSSQL_PID=Express' \
-p 1433:1433 \
-e 'SA_PASSWORD=**P455w0rd**' \
--name mssql \
mcr.microsoft.com/mssql/rhel/server:2019-latest

There’s a hardcoded prerequisite verification to check that the system has at least 2000 MB of RAM. And I have less than one GB here in this free tier:


awk '/^Mem/{print $0,$2/1024" MB"}' /proc/meminfo

Fortunately, there’s always a nice geek on the internet with an awesome solution: hack the sysinfo() system call with an LD_PRELOAD’ed wrapper: A Slightly Liberated Microsoft SQL Server Docker image

Let’s get it:
git clone https://github.com/justin2004/mssql_server_tiny.git
cd mssql_server_tiny

I changed the FROM to build from the 2019 RHEL image, and I preferred to use /etc/ld.so.preload rather than overriding the CMD command with LD_PRELOAD:


# build stage: compile the sysinfo() wrapper on Oracle Linux
FROM oraclelinux:7-slim AS build0
WORKDIR /root
RUN yum update -y && yum install -y binutils gcc
ADD wrapper.c /root/
RUN gcc -shared -ldl -fPIC -o wrapper.so wrapper.c

# final stage: copy the wrapper into the SQL Server image and preload it
FROM mcr.microsoft.com/mssql/rhel/server:2019-latest
COPY --from=build0 /root/wrapper.so /root/
ADD wrapper.c /root/
USER root
RUN echo "/root/wrapper.so" > /etc/ld.so.preload
USER mssql

I didn’t change the wrapper for the sysinfo function:
#define _GNU_SOURCE
#include <stdio.h>
#include <dlfcn.h>
#include <sys/sysinfo.h>
int sysinfo(struct sysinfo *info){
// clear it
//dlerror();
void *pt=NULL;
typedef int (*real_sysinfo)(struct sysinfo *info);
// we need the real sysinfo function address
pt = dlsym(RTLD_NEXT,"sysinfo");
//printf("pt: %x\n", *(char *)pt);
// call the real sysinfo system call
int real_return_val=((real_sysinfo)pt)(info);
// but then modify its returned totalram field if necessary
// because sqlserver needs to believe it has "2000 megabytes"
// physical memory
if( info->totalram < 1000l * 1000l * 1000l * 2l ){
    info->totalram = 1000l * 1000l * 1000l * 2l ;
}
return real_return_val;
}

I build the image from there:

docker build -t mssql .


I run it:

docker run -d \
-e "ACCEPT_EULA=Y" \
-e 'MSSQL_PID=Express' \
-p 1433:1433 \
-e 'SA_PASSWORD=**P455w0rd**' \
--name mssql \
mssql

I wait until it is ready:

until docker logs mssql | grep -C10 "Recovery is complete." ; do sleep 1 ; done

All is ok and I connect and check the version:

Well… as you can see, my first attempt failed. I am running with very low memory here, so many memory allocation problems can be expected. If you look at the logs after a while, many automatic system tasks fail. But that’s sufficient for a minimal lab, and you can tweak some Linux and SQL Server parameters if you need to. Comments are welcome here for feedback and ideas…

Port 1433 is exposed locally here and can be tunneled through ssh. This gives a free lab environment, always accessible from anywhere, for small tests in MS SQL running on the Oracle free tier. From DBeaver on my laptop, I connect by just mentioning the public IP address, the private ssh key and the connection information, as in the tunnel sketch below.
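A minimal sketch of such a tunnel (assuming the same public IP address and ssh key as above; the local port choice is arbitrary):

# forward local port 1433 to the SQL Server port inside the VM
ssh -N -L 1433:localhost:1433 opc@129.213.138.34
# then point DBeaver (or any other client) at localhost:1433 with the sa credentials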

The post Running SQL Server on the Oracle Free tier appeared first on Blog dbi services.

Tagging Overview & Concepts In Oracle Cloud (OCI)

Online Apps DBA - Sat, 2020-02-08 09:31

Tagging in Oracle Cloud Infrastructure (OCI) is used to attach metadata to resources so that it is easier to list and organize resources spread across multiple compartments, and for billing purposes (cost-tracking tags). Check out K21 Academy’s post at https://k21academy.com/oci60 that covers: • Overview of Tagging in OCI • Benefits of Using Tags […]

The post Tagging Overview & Concepts In Oracle Cloud (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[Fixed] opmnctl startall fails In EBS R12.2

Online Apps DBA - Sat, 2020-02-08 06:53

Did you encounter the “opmnctl startall fails” error in EBS R12.2 after upgrading from EBS 12.2.0 to 12.2.8, while starting the Application Tier? If YES, then check out K21 Academy’s blog post at https://k21academy.com/appsdba64 that covers: • Issue Encountered • Root Cause • Fix Begin your journey towards becoming an Apps DBA Expert by joining […]

The post [Fixed] opmnctl startall fails In EBS R12.2 appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Zero Downtime Migration In Oracle Cloud Infrastructure (OCI)

Online Apps DBA - Sat, 2020-02-08 05:56

Oracle has recently launched a cloud migration utility named Zero Downtime Migration (ZDM) that helps you migrate your on-premises Oracle Database to an Oracle Cloud database (VM DB, ExaCS, Bare Metal, Exadata Cloud at Customer), with the aim of keeping the migration process as simple as possible with minimal downtime on production workloads. If […]

The post Zero Downtime Migration In Oracle Cloud Infrastructure (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

New Feature: Exadata (ExaCs) DB System: Shared Database Homes

Online Apps DBA - Sat, 2020-02-08 04:11

Oracle recently announced that you can now add Oracle Database homes to an existing Exadata DB system by using the Oracle Cloud Infrastructure (OCI) Console, the API (REST API), or the CLI (dbaascli). Check out K21 Academy’s blog post at https://k21academy.com/clouddba53 that covers: • What Is Oracle Database Home? • How To Manage DB Home In […]

The post New Feature: Exadata (ExaCs) DB System: Shared Database Homes appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

ORA-29279: SMTP permanent error: 550 XXXsmtpXX: Host xxx.xxx.xxx.xxx: No unauthenticated relaying permitted

Tom Kyte - Fri, 2020-02-07 18:03
Hi Tom, I want to send email through PL/SQL by using the utl_mail package. I have completed the steps below: 1. Installed the scripts @$ORACLE_HOME/rdbms/admin/utlmail.sql and @$ORACLE_HOME/rdbms/admin/prvtmail.plb 2. Granted execute to us...
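A minimal sketch of the remaining configuration (host names and grantee are placeholders; note that ORA-29279 itself means the SMTP server refuses to relay for the database host, which has to be fixed on the mail server side):

alter system set smtp_out_server = 'smtp.example.com:25' scope=both;
grant execute on utl_mail to app_user;
begin
  utl_mail.send(sender     => 'db@example.com',
                recipients => 'dba@example.com',
                subject    => 'test',
                message    => 'hello from utl_mail');
end;
/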
Categories: DBA Blogs

How to load an XML file into an Oracle table and use an XML query to get the result

Tom Kyte - Fri, 2020-02-07 18:03
Hi Tom, I have an XML file from a web report. There are some error records in the file, and it is difficult to find the errors in the XML file itself, so I want to load this file into an Oracle table and then use an XML query. Below is a sample file and the expected result. This is fir...
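As a sketch of the general approach (table, element and column names are made up for illustration):

create table xml_stage (doc xmltype);

insert into xml_stage values (xmltype(
  '<report><row><id>1</id><status>OK</status></row>
           <row><id>2</id><status>ERROR</status></row></report>'));

-- shred the document into relational rows and filter the error records
select x.id, x.status
from   xml_stage s,
       xmltable('/report/row' passing s.doc
                columns id     number       path 'id',
                        status varchar2(10) path 'status') x
where  x.status = 'ERROR';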
Categories: DBA Blogs

Should I go for ODA 19.5 or should I wait until 19.6?

Yann Neuhaus - Fri, 2020-02-07 05:25
Introduction

As you may know, Oracle Database 19c has been available for new (X8-2) and older Oracle Database Appliances for several weeks now. The current version is 19.5. But when you go to the official ODA documentation, it still proposes version 18.7 first, which is not compatible with 19c databases. Here is why.

19c database is the final 12.2

First of all, 19c is an important release because it’s the terminal release of the 12.2 family, as 11.2.0.4 was for 11.2. Please refer to my other blog to understand the new Oracle versioning. ODA always supports new releases a few months after they become available on Linux, which is why 19c is only available now.

Drawbacks of 19.5

19.5 is available for your ODA, but you will not be able to patch to this version. The reason is quite simple: it’s not a complete patch. You can only download the ISO for reimaging plus the 19c grid and database software, and that’s it. The reason for not yet having a patch lies in the difficulty of updating the OS part: 19.5 runs on Linux 7.7, while all previous releases are stuck on Linux 6.10, so the patch would have to include the OS upgrade, and this jump is not so easy. That’s the first drawback.

The second drawback is that you cannot run another database version. If you still need 18c, 12.2, 12.1 or 11.2, this 19.5 is not for you.

The third drawback is that you will not be able to patch from 19.5 to 19.6 or a newer version, simply because 19.5 is a one-off release.

Another drawback concerns the documentation, which is not yet complete: many parts are copied and pasted from 18.7. For example, the documented initctl command for restarting the dcs agent is not a command that actually exists on Linux 7.

Moreover, my first tests on this version show annoying bugs related to database creation, which are under investigation by Oracle.

When will 19.6 be ready?

19.6 is planned for 2020, yes, but which month? There is no official date; it could come in March, or during the summer, nobody knows. As a result, you will have to wait for this patch to be released before starting your migration to 19c on ODA.

So, what to do?

3 solutions are possible:

  • You can keep running your old databases until the patch is released: buy extended support for 11gR2/12cR1; Premier Support is still OK for 12.2.0.1 and 18c
  • Migrate your old 11gR2 and 12cR1 databases to 18c to be prepared for 19c and avoid buying extended support; the differences between 18c and 19c should be minimal
  • Deploy 19.5 for testing purposes on a test ODA and start your migration project to get prepared for 19.6. Once it is available, patch or redeploy your ODAs and migrate all your databases
Conclusion

Not having 19.6 now is really annoying; after all, we chose ODA because it’s easier to get updates. But you can still prepare everything for the 19c migration, by first migrating to 18c or by giving 19c a try with this 19.5 release.

The post Should I go for ODA 19.5 or should I wait until 19.6? appeared first on Blog dbi services.

Antiques

Frank van Bortel - Wed, 2020-02-05 04:08
Apart from being in more or less the same position as when I started blogging (recovering from surgery), I have also ended my "productive" life and am enjoying a new status: pensionado.

Cleaning up

That status comes with the task of cleaning up, and I found some Oracle documentation that might have antique value to some. Leave a comment if you are interested in one of the following items, and let me know.

ROLLBACK TO SAVEPOINT;

Yann Neuhaus - Tue, 2020-02-04 14:07
By Franck Pachot

I love databases and, rather than trying to compare and rank them, I like to understand their differences. Sometimes you make a mistake and encounter an error. Let’s take the following example:
create table DEMO (n int);
begin transaction;
insert into DEMO values (0);
select n "after insert" from DEMO;
update DEMO set n=1/n;
select n "after error" from DEMO;
commit;
select n "after commit" from DEMO;

The “begin transaction” is not valid syntax in all databases, because transactions may be started implicitly, but the other statements are valid syntax in all the common SQL databases. They all raise an error during the update because there’s one row with N=0, and we cannot calculate 1/N: that is a math error. But what about the result of the last select?

If I run this with Oracle, DB2, MS SQL Server, or MySQL (links go to examples in db<>fiddle), the row added by the insert is always visible to my session: after the insert, of course, after the update error, and after the commit (when it becomes visible to everybody).

The same statements run with PostgreSQL give a different result. You cannot do anything after the error, only roll back the transaction. Even if you “commit”, it will roll back.

Yes, no rows remain there! Same code, but a different result.

You can get the same behavior as the other databases by defining a savepoint before the statement and rolling back to that savepoint after the error. Here is the db<>fiddle. With PostgreSQL you have to define an explicit savepoint if you want to continue in your transaction after the error, as in the sketch below. Other databases take an implicit savepoint. By the way, I said “statement”, but here is Tanel Poder showing that in Oracle the transaction is actually not related to the statement but to the user call: Oracle State Objects and Reading System State Dumps Hacking Session Video – Tanel Poder’s blog
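A minimal sketch of that explicit-savepoint pattern, reusing the DEMO table from above:

begin transaction;
insert into DEMO values (0);
savepoint before_update;              -- explicit savepoint
update DEMO set n=1/n;                -- fails: division by zero
rollback to savepoint before_update;  -- undo only the failed statement
commit;                               -- the inserted row is persisted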

In Oracle, you can run multiple statements in a user call with a PL/SQL block. With PostgreSQL, you can group multiple statements in one command, but you can also run a PL/pgSQL block. And with both, you can catch errors in the exception block. And then it is PostgreSQL that takes an implicit savepoint, as I explained in a previous post: PostgreSQL subtransactions, savepoints, and exception blocks
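Here is a rough PL/pgSQL sketch of that implicit savepoint (an illustration, not the exact code from the earlier post):

do $$
begin
  update DEMO set n = 1/n;
exception
  when division_by_zero then
    -- the exception block implies a savepoint: the failed update is
    -- rolled back, but the surrounding transaction remains usable
    raise notice 'update rolled back to the implicit savepoint';
end $$;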

That previous post was on Medium (you can read https://www.linkedin.com/pulse/technology-advocacy-why-i-am-still-nomad-blogger-franck-pachot/ where I explain my blog “nomadism”), but as you can see I’m back on the dbi-services blog for my 500th post there.

My last post here was called “COMMIT” (https://blog.dbi-services.com/commit/), where I explained that I was quitting consulting for CERN to start something new. But even if I decided to change, I was really happy at dbi services (as I mentioned in a LinkedIn post about great places to work). And when people like to work together, it creates an implicit SAVEPOINT where you can come back if you encounter some unexpected results. Yes… this far-fetched analogy is just to mention that I’m happy to come back to dbi services, and this is where I’ll blog again.

As with many analogies, it reaches the limits of the comparison very quickly. You do not ROLLBACK a COMMIT, and it is not a real rollback, because this year at CERN was a good experience. I’ve met great people there, learned interesting things about matter and antimatter, and went out of my comfort zone, like co-organizing a PostgreSQL meetup and inviting external people (https://www.linkedin.com/pulse/working-consultants-only-externalization-franck-pachot/) for visits and conferences.

This “rollback” is actually a step further, but back in the context I like: solving customer problems in a company that cares about its employees and customers. And I’m not exactly coming back to the same “savepoint”. I was mostly focused on Oracle, and I’m now covering more technologies in the database ecosystem. Of course, consulting on Oracle Database will still be a major activity. But today, many other databases are rising: NoSQL, NewSQL… Open source is more and more relevant. And in this jungle, replication and federation technologies are on the rise. I’ll continue to share in these areas, and you can follow this blog, the RSS feed, and/or my twitter account.

The post ROLLBACK TO SAVEPOINT; appeared first on Blog dbi services.

maxquerylen

Jonathan Lewis - Tue, 2020-02-04 09:50

The view v$undostat holds summary information about undo activity that can be used by the automatic undo mechanism to optimise the undo retention time (and hence undo space allocation). The view holds one row for every ten-minute interval in the last 4 days (96 hours) and includes two columns called maxquerylen and maxqueryid – which tell you something about the query that was considered to be the longest running query active in the interval.

In this note I want to explain why the contents of these two columns are sometimes (possibly often) completely irrelevant despite there being a few notes on the internet about how you should investigate them to help you decide on a suitable setting for the undo_retention.

The descriptions of these columns in the 19c reference manual are as follows:

  • maxquerylen – Identifies the length of the longest query (in seconds) executed in the instance during the period. You can use this statistic to estimate the proper setting of the UNDO_RETENTION initialization parameter. The length of a query is measured from the cursor open time to the last fetch/execute time of the cursor. Only the length of those cursors that have been fetched/executed during the period are reflected in the view.
  • maxqueryid – SQL identifier of the longest running SQL statement in the period

It would be hard to misunderstand the meaning of the second column – if the first column tells us that Oracle has spotted a “longest query” then the second column gives us the sql_id so we can check v$sql to find out what it was. But what sort of queries are going to show up as the “longest query” in the interval?

Here’s an example from a few hours of a nearly idle instance, querying the begin and end times (formatted to show just day of month + time), with the two critical columns:


select  begin_time, end_time, maxquerylen, maxqueryid 
from    v$undostat 
order by 
    begin_time
;

BEGIN_TIME	END_TIME    MAXQUERYLEN MAXQUERYID
--------------- ----------- ----------- -------------
...
04 10:50:18	04 11:00:18	      0
04 11:00:18	04 11:10:18	    356 f3yfg50ga0r8n
04 11:10:18	04 11:20:18	    883 25u1mbkcr9rnu
04 11:20:18	04 11:30:18	   1486 25u1mbkcr9rnu
04 11:30:18	04 11:40:18	   2090 25u1mbkcr9rnu
04 11:40:18	04 11:50:18	      0
04 11:50:18	04 12:00:18	   3299 25u1mbkcr9rnu
04 12:00:18	04 12:10:18	   3903 25u1mbkcr9rnu
04 12:10:18	04 12:20:18	   4507 25u1mbkcr9rnu
04 12:20:18	04 12:30:18	      0
04 12:30:18	04 12:40:18	      0
04 12:40:18	04 12:50:18	      0
04 12:50:18	04 13:00:18	      0
04 13:00:18	04 13:10:18	      0
04 13:10:18	04 13:20:18	      0
04 13:20:18	04 13:30:18	      0
04 13:30:18	04 13:37:27	   9035 25u1mbkcr9rnu

173 rows selected.

Notice, particularly, that the SQL_ID 25u1mbkcr9rnu disappears from the 11:40 interval, then re-appears at 11:50, then disappears again from 12:20 through 13:20 (lunchtime), then reappears again at 13:30. And when it reappears after an absence the query length has increased in a way that’s consistent with the gap. So it looks as if the query wasn’t running during the gap, but turns out to have been running after the gap ended. (Is this Quantum queries?)

The explanation is in the detail of the definition: “from the cursor open time to the last fetch/execute time”. From an SQL*Plus session I “set pause on” then executed the query “select * from all_objects” and hit return a few times to get a few pages of output. Then, spread over the next couple of hours, I hit return a few more times to get a few more pages of output. Each time I hit return my session does another fetch call, and the code behind v$undostat notices that my query is still active.
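In SQL*Plus terms, the test looked like this:

set pause on
select * from all_objects;
-- each press of return issues another fetch call, so the code behind
-- v$undostat keeps seeing the cursor as active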

I don’t know exactly how Oracle is keeping track of “active” statements because there seem to be some inconsistencies in the reporting (and I’ll comment on those later), but as a first approximation, until you close a cursor (either explicitly or implicitly) some piece of Oracle’s code registers the fact that the query might do further fetches, which means it might need to apply more undo to make current data read-consistent with the cursor’s start SCN, which means that it should take note of the time the cursor has been open because the undo retention time might need to be that long.

Inconsistencies.

I said there were some inconsistencies in the reporting. I’ve noticed three anomalies – perhaps due to the extreme idleness of the instance I was using for testing.

  1. At about 12:45 I hit return a few times to get the maxquerylen and maxqueryid refreshed – but the code didn’t seem to notice that I had done a few more fetches of a long running query. So it seems to be possible for Oracle to miss the query that should be reported.
  2. At about 11:52 I hit return a few times again, and you can see that the two critical columns were refreshed, but if you do the arithmetic Oracle has attributed 600 seconds to the query – the whole of the 10 minute refresh interval, not just the time up to the moment of the last fetch that I had done in that interval.
  3. I didn’t hit return again until about 12:45 (the moment in point 1 above when the two columns didn’t refresh the way I though they should), but the columns kept updating through 12:00 and 12:10 intervals anyway before disappearing from the 12:20 interval. So it looks like queries can be reported as long running even when they haven’t been executing or fetching.

There is one last anomaly – and this relates to the reason I started looking at these columns. If you check the entry for 11:00 you’ll see that the SQL Id ‘f3yfg50ga0r8n’ has been reported as “running” for 356 seconds. But here’s what I found in v$sql for that query:


select  executions, parse_calls, fetches, end_of_fetch_count, elapsed_time, cpu_time, rows_processed, sql_text 
from    v$sql 
where   sql_id = 'f3yfg50ga0r8n'
;

EXECUTIONS PARSE_CALLS	  FETCHES END_OF_FETCH_COUNT ELAPSED_TIME   CPU_TIME ROWS_PROCESSED SQL_TEXT
---------- ----------- ---------- ------------------ ------------ ---------- -------------- ------------------------------------------
        79          79         79                  0        20487      10667             79 select obj# from obj$ where dataobj# = :1


The SQL looks like a sys-recursive statement which, in 79 executions, has accumulated 20 milliseconds of elapsed time (rather than 356 seconds – but that difference could just be related to one or other of the anomalies I reported earlier). The key thing to note is that the value of column end_of_fetch_count is zero: this looks like a statement where Oracle has simply not bothered to fetch all the data and has then not bothered to close the cursor. As a result it’s possible that each time the statement is executed (note that parse_calls = executions, so we’re not looking at a “held” cursor) the code behind v$undostat looks back at the time the cursor was initially opened to measure the query length, rather than looking at the time the statement was re-executed.

This may go a long way to answering the question that came up on Oracle-l earlier today, as follows:

The following query (sql_id 89w8y2pgn25yd) was recorded in v$undostat.maxqueryid in a 12.2 database during a period of high undo usage: select ts# from sys.ts$ where ts$.online$ != 3 and bitand(flags,2048) != 2048;

select
        undoblks,txncount,maxquerylen,maxconcurrency,activeblks
from    v$undostat u
where   maxqueryid='89w8y2pgn25yd'
;

UNDOBLKS  TXNCOUNT  MAXQUERYLEN  MAXCONCURRENCY  ACTIVEBLKS
--------  --------  -----------  --------------  ----------
   39199      4027         1378               5     2531960

What is this query, and how does it manage to report a maximum query length of 1,378 seconds? Just like the one above, it’s a sys-recursive query; this one appears when you query dba_tablespaces, and it executes once for every row, taking just a fraction of a second each time. But if you trace a query like “select tablespace_name from dba_tablespaces” you’ll see that every time the query is called the trace file shows lines for “Parse, Exec, Fetch, Close” until the last call – which doesn’t report a “Close”.
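You can reproduce this with a quick sketch along the following lines (event 10046 level 8 is the standard extended SQL trace):

alter session set tracefile_identifier = 'ts_test';
alter session set events '10046 trace name context forever, level 8';
select tablespace_name from dba_tablespaces;
alter session set events '10046 trace name context off';
-- in the trace file each execution of the recursive statement shows
-- Parse/Exec/Fetch/Close except the last, which leaves the cursor open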

Just like my original “select * from all_objects” there’s a query dangling with an open cursor, leaving Oracle with the opportunity to go back to the moment it was opened and report it as a “long running query”.

tl;dr

The maxquerylen and maxqueryid columns in v$undostat don’t tell you about statements that have taken a long time to change a lot of data and generate a lot of undo; they tell you about statements that might need to apply a lot of undo to see read-consistent data, and therefore might become victims of an “ORA-01555: snapshot too old” error.

The statements reported may be completely irrelevant, because there are various reasons (some, possibly, bug-related) why a cursor may be treated as if it was opened a long time in the past when it is actually the re-use of an existing “dangling” cursor. It’s also possible that some bad client code will treat a cursor in a way that does no harm to the client program but hides a more interesting query that would otherwise have been reported by these two columns.

How to retrieve 200 million records into a txt file using a select query?

Tom Kyte - Tue, 2020-02-04 09:02
Hi Tom, We have 200+ million records in our table, and we have to retrieve all of them into a text file without impacting performance. I tried using DBMS_PARALLEL_EXECUTE but this doesn't have a return type. Is there a way to get all Millions of...
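One common low-tech alternative to DBMS_PARALLEL_EXECUTE is a plain SQL*Plus spool with a large array size; a sketch, with the table name and path made up:

set termout off feedback off pagesize 0 linesize 32767 trimspool on arraysize 5000
spool /tmp/big_table.txt
select * from big_table;
spool off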
Categories: DBA Blogs

Getting error -12842 : ORA-12842: CURSOR INVALIDATED DURING PARALLEL EXECUTION

Tom Kyte - Tue, 2020-02-04 09:02
Below is my scenario: we are getting the error randomly in production. When we restart the process, the error disappears. We are not able to reproduce it even in a lower environment. Can you please help us identify the potential issue with the insert statement ...
Categories: DBA Blogs

Truncate Multiple tables in a single line statement.

Tom Kyte - Tue, 2020-02-04 09:02
My question: is it possible to truncate multiple tables in a single statement? Firstly, thanks for giving me this opportunity to ask a question. I have developed code to truncate multiple tables in a single statement. Please refer to the ...
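For reference, the usual workaround is dynamic SQL in a PL/SQL loop, since a single TRUNCATE statement only takes one table (table names below are placeholders):

begin
  for t in (select table_name
            from   user_tables
            where  table_name in ('T1','T2','T3'))
  loop
    execute immediate 'truncate table ' || t.table_name;
  end loop;
end;
/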
Categories: DBA Blogs

How to tune a create table that runs in a loop

Tom Kyte - Tue, 2020-02-04 09:02
Hi All, can someone help me tune this piece of code? It is taking 45 minutes to complete execution as of now. We have tried taking out the loop and creating 5 different create statements, but gained just 2 minutes. Each of the tables have and...
Categories: DBA Blogs

How many not-guaranteed restore points can I use at a given point

Tom Kyte - Tue, 2020-02-04 09:02
Hello! We have a development database where we won't take any backups. Our method to recreate it will be an RMAN duplicate from the production one. But we want to give our development team the option to do a flashback of that database...
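For reference, a minimal sketch of a normal (not guaranteed) restore point; v$restore_point shows what currently exists:

create restore point before_refresh;
select name, scn, guarantee_flashback_database from v$restore_point;
-- rewinding needs the database mounted and flashback logging enabled:
--   flashback database to restore point before_refresh;
--   alter database open resetlogs;
drop restore point before_refresh;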
Categories: DBA Blogs

Archiving Using DBMS_DATAPUMP

Tom Kyte - Tue, 2020-02-04 09:02
I have a requirement to archive historical data, that is, to delete data from a number of tables that is more than 36 months old, but only a subset of the data; for argument's sake, let's call them quotes given to customers that have not been taken up. We curre...
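A rough sketch with DBMS_DATAPUMP, assuming a hypothetical QUOTES table with a QUOTE_DATE column and the standard DATA_PUMP_DIR directory object (export the old subset first, verify the dump, then delete):

declare
  h number;
begin
  h := dbms_datapump.open(operation => 'EXPORT', job_mode => 'TABLE');
  dbms_datapump.add_file(h, 'quotes_archive.dmp', 'DATA_PUMP_DIR');
  dbms_datapump.metadata_filter(h, 'NAME_EXPR', q'[IN ('QUOTES')]');
  dbms_datapump.data_filter(h, 'SUBQUERY',
      q'[WHERE quote_date < add_months(sysdate,-36)]', 'QUOTES');
  dbms_datapump.start_job(h);
  dbms_datapump.detach(h);
end;
/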
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator