Feed aggregator

PL/SQL Package with no DEFINER or INVOKER rights - Part 2

Pete Finnigan - Sat, 2020-02-01 12:01
I posted about a discovery I made whilst testing for an issue in our PL/SQL code analyser checks in PFCLScan last week as I discovered that the AUTHID column in DBA_PROCEDURES or ALL_PROCEDURES or USER_PROCEDURES can be NULL; this caused....[Read More]

Posted by Pete On 28/01/20 At 03:11 PM

Categories: Security Blogs

Oracle BPEL: What's up with it?

Dietrich Schroff - Fri, 2020-01-31 15:22
Nearly 20 years ago there was a new standard for modelling business processes: BPEL (https://en.wikipedia.org/wiki/Business_Process_Execution_Language)
Oracle bought Collaxa and released the Oracle BPEL Process Manager.
In 2009 I wrote some posts about tutorials which came from Collaxa and which were very useful for getting started with the Oracle BPEL product:
http://dietrichschroff.blogspot.com/search?q=lost+collaxa

And what is left today?

The product is still listed on the Oracle homepage:

And there is a section with some "news":


But the version number 10.1.3.4 is not really something new, and if you open the white paper you will see:

I think a whitepaper from 2014 is not really something new. 

If you take a look at the BPEL specification at OASIS (http://docs.oasis-open.org/wsbpel/2.0/OS/wsbpel-v2.0-OS.html), you can see that there has been no update to the BPEL standard for more than 10 years. 

It is very sad that this concept of modelling processes in a graphical way was unsuccessful...

Provision EBS R12 On Oracle Cloud From Cloud Marketplace (OCI)

Online Apps DBA - Fri, 2020-01-31 07:04

Provision EBS R12 On Oracle Cloud From Cloud Marketplace (OCI) Do you know that you can now provision EBS (R12) On Oracle Cloud (OCI) just by clicking a button? All thanks to the availability of the EBS (R12) image on Oracle Cloud Marketplace. Check out K21Academy’s blog post at https://k21academy.com/ebscloud37 that covers: •Different Ways To […]

The post Provision EBS R12 On Oracle Cloud From Cloud Marketplace (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle Unshackles Stores from the Cash Wrap with Flexible POS Systems

Oracle Press Releases - Thu, 2020-01-30 08:00
Press Release
Oracle Unshackles Stores from the Cash Wrap with Flexible POS Systems New point of service hardware and software help retailers deliver a more engaging, full-service shopping experience with ultimate flexibility, form and function

Redwood Shores, Calif.—Jan 30, 2020

With 52 percent of shoppers reporting their favorite way to shop is still in store, retailers must continually push the boundaries of in-store experiences. For shoppers, convenience reigns supreme—they want to get in and out with the goods they were seeking, which can only happen when inventory is accessible and smart, connected associates are ready to help. Enabling brands to deliver on this promise, Oracle Retail has continued to evolve its Xstore Point-of-Service (POS) and Oracle hardware systems.  

The new Workstation 655 and 625 models feature a sleek, contemporary design that is built tough but with a minimal footprint. The Workstation 6 models come preinstalled with the ‘Oracle Linux for MICROS’ operating system or Windows 10 IoT Enterprise. This enables retailers such as McColl’s or Burberry to concentrate on their core mission of delivering the customer experience most aligned with their brand, without sacrificing scale or long-term growth.

“McColl’s performs more than four million customer transactions per week through 2,700 POS terminals and we need fast and reliable store systems to support our customers and store colleagues,” said Neil Hodge, information technology director, McColl’s. “We chose Oracle Retail Xstore Point-of-Service and the Oracle hardware because it is an adaptable solution capable of supporting our growth as operational requirements change.”

Oracle’s platform for modern retail allows retailers to select the format that best fits their unique brand image and customer expectations. This may be a fully mobile store, discrete fixed units or a combination, which is increasingly common. The form factor is just as important as having an integrated hardware and software point-of-service (POS) solution that provides consistency across interfaces, features and functionality. This consistency ensures a seamless experience for both the consumer and the associate, helping reduce errors, increase speed and enhance the interaction with the inclusion of rich customer data.  

For example, using the customer data in Xstore on their mobile tablet, a store associate can guide a shopper to the newest shoe from their favorite brand, check inventory in the customer’s size and then complete the transaction right there in the shoe department. A seamless, convenient experience for both the shopper and associate.

As retailers explore new store formats and seek agility in their experiences, Oracle offers a hardware footprint engineered from the ground up including industry-leading peripherals, operating system, application servers, and database, all designed to run and enhance the Oracle Retail Xstore Point-of-Service experience right out of the box.

“Our latest hardware delivers a modern look and feel, and a more streamlined and resilient user-friendly footprint, which supports full feature store applications,” said Jeff Warren, vice president of strategy for Oracle Retail.  “Couple this with our hardware service offering and the result is an agile approach that enables retailers of all sizes to pick and choose the POS that is most brand-appropriate to their image, without sacrificing software features and functionality.”

The Oracle Retail Hardware portfolio is fully integrated with the Oracle Retail Xstore Point-of-Service and supporting omnichannel suite. The cohesive solution can provide retailers a wide range of benefits including faster implementation, lower total cost of ownership (TCO), a consistent user experience, and investment protection that will accommodate future upgrades.

Contact Info
Kaitlin Ambrogio
Oracle
+1 781-434-9555
kaitlin.ambrogio@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. For more information, visit our website www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kaitlin Ambrogio

  • +1 781-434-9555

Can AI-based Cybersecurity Render Human Error Extinct?

Oracle Press Releases - Thu, 2020-01-30 08:00
Blog
Can AI-based Cybersecurity Render Human Error Extinct?

By Greg Jensen, Senior Director of Cloud Security—Jan 30, 2020

Numbed by yet another year of data breaches–2,013 in 2019 alone by one estimate–it is no surprise that researchers widely acknowledge human error as a top source of cyber insecurity. C-Suite executives and policy makers called “human error” their top cybersecurity risk in Oracle’s Security in the Age of AI report, released last May.

Without change, the scale and frequency of data breaches will continue to increase. To reverse the trend, we must better understand how, exactly, people contribute to cyber insecurity.

How human error creates problems

In December 2019, Oracle surveyed 300 enterprise-level IT decision makers nationwide to examine concerns around cybersecurity posture and potential solutions. The survey evaluated the state of cybersecurity preparedness and trust in employing AI to address security gaps at organizations with 500+ employees. We saw key red flags from patching protocols to workforce constraints that created a cycle of vulnerability and error.

The results demonstrate that the human contribution to IT vulnerability remains undeniable. Most respondents (58%) cited human error as a leading reason for IT vulnerabilities. Yet what is  often characterized as a “technical” factor–insufficient patching–was judged just as important as human error, with 57% of those surveyed citing it as a leading reason for vulnerability. But insufficient patching–meaning either that patches are not applied frequently enough, or they are not applied consistently enough–is a factor firmly rooted in human behavior.

The problem of human error may be even bigger than these statistics suggest. Digging deeper, the survey found that nearly half (49%) of respondents cited available workforce as a constraint on patching; 62% said they would apply a wider variety of patches more frequently if they had additional personnel to do so. When asked about major contributors to failed patches, 84% of respondents pointed to human error.

Not surprisingly, a separate question found people were less trusted to keep organization IT systems safe (16%) than were machines (31%).

A role for AI

Enterprises will never be able to hire enough humans to fix problems created by human error. Existing methods that companies and agencies employ leave considerable room for error, with 25% of all security patches deployed by organizations failing to reach every device. When U.S.-based CEOs cite cybersecurity as the number one external concern for their business in 2019, the simple act of patching (and failing) is a huge unresolved risk. Why should humans have to fix this problem alone, when autonomous capabilities are available to improve IT security?

Some vendors may place blame on customers for incorrectly configuring computing resources, but Oracle understands customers deserve more, and should have the option to do less, which is why we use autonomous technologies to secure customer systems. With the cybersecurity workforce gap growing, deploying AI is the best way to supplement human abilities and protect systems more efficiently.

Oracle’s investment of $6 billion each year to address problems like automated patching and security configuration translates into real action taken to protect our customers. Oracle created the world’s first autonomous database, a defining technology for this next generation of AI-led software and cloud services. Our self-repairing, self-tuning, self-patching autonomous technology helps protect businesses and governments from threats wherever they originate.

By taking on this responsibility, Oracle empowers enterprises to redeploy the scarce human talent of their cybersecurity workforce to address new priorities, not basic security hygiene. Enterprises that adopt autonomous technologies broadly and quickly will help eliminate an entire category of cyber risk (patching)–and stop repeating the pattern of human-error induced insecurity in the future.

Tracking Logs Inside a Documentum Container (part II)

Yann Neuhaus - Wed, 2020-01-29 18:45
Testing the log watcher

This is part II of the article. Part I is here.
All the above code has to be included in the entrypoint script so it gets executed at container start-up time, but it can also be tested more simply in a traditional repository installation.
First, we’ll move the code into an executable script, e.g. entrypoint.sh, and run it in the background in a first terminal. Soon, we will notice that lots of log messages get displayed, e.g. from jobs executing in the repository:

---------------------------------------
Job Arguments:
(StandardJobArgs: docbase_name: dmtest.dmtest user_name: dmadmin job_id: 0800c35080007042 method_trace_level: 0 )
window_interval=120
queueperson=null
max_job_threads=3
 
---------------------------------------
2020-01-02T07:03:15.807617 1600[1600] 0100c3508000ab80 [DM_SESSION_I_ASSUME_USER]info: "Session 0100c3508000ab80 is owned by user dmadmin now."
Executing query to collect XCP automatic tasks to be processed:
select r_object_id from dmi_workitem where a_wq_name = 'to be processed by job' and r_auto_method_id != '0000000000000000' and (r_runtime_state = 0 or r_runtime_state = 1) order by r_creation_date asc
Report End 2020/01/02 07:03:15
2020-01-02T07:03:16.314586 5888[5888] 0100c3508000ab90 [DM_SESSION_I_SESSION_QUIT]info: "Session 0100c3508000ab90 quit."
Thu Jan 02 07:04:44 2020 [INFORMATION] [LAUNCHER 6311] Detected during program initialization: Version: 16.4.0080.0129 Linux64
2020-01-02T07:04:44.736720 6341[6341] 0100c3508000ab91 [DM_SESSION_I_SESSION_START]info: "Session 0100c3508000ab91 started for user dmadmin."
Thu Jan 02 07:04:45 2020 [INFORMATION] [LAUNCHER 6311] Detected while preparing job dm_Initialize_WQ for execution: Agent Exec connected to server dmtest: [DM_SESSION_I_SESSION_START]info: "Session 0100c3508000ab91 started for user dmadmin."
 
 
2020-01-02T07:04:45.686337 1600[1600] 0100c3508000ab80 [DM_SESSION_I_ASSUME_USER]info: "Session 0100c3508000ab80 is owned by user dmadmin now."
2020-01-02T07:04:45.698970 1597[1597] 0100c3508000ab7f [DM_SESSION_I_ASSUME_USER]info: "Session 0100c3508000ab7f is owned by user dmadmin now."
Initialize_WQ Report For DocBase dmtest.dmtest As Of 2020/01/02 07:04:45
 
---------------------------------------
Job Arguments:
(StandardJobArgs: docbase_name: dmtest.dmtest user_name: dmadmin job_id: 0800c3508000218b method_trace_level: 0 )
window_interval=120
queueperson=null
max_job_threads=3
 
---------------------------------------
Starting WQInitialisation job:
2020-01-02T07:04:45.756339 1600[1600] 0100c3508000ab80 [DM_SESSION_I_ASSUME_USER]info: "Session 0100c3508000ab80 is owned by user dmadmin now."
Executing query to collect Unassigned worktiems to be processed:
select r_object_id, r_act_def_id from dmi_workitem where a_held_by = 'dm_system' and r_runtime_state = 0 order by r_creation_date
Total no. of workqueue tasks initialized 0
Report End 2020/01/02 07:04:45
2020-01-02T07:04:46.222728 6341[6341] 0100c3508000ab91 [DM_SESSION_I_SESSION_QUIT]info: "Session 0100c3508000ab91 quit."
Thu Jan 02 07:05:14 2020 [INFORMATION] [LAUNCHER 6522] Detected during program initialization: Version: 16.4.0080.0129 Linux64
2020-01-02T07:05:14.828073 6552[6552] 0100c3508000ab92 [DM_SESSION_I_SESSION_START]info: "Session 0100c3508000ab92 started for user dmadmin."
Thu Jan 02 07:05:15 2020 [INFORMATION] [LAUNCHER 6522] Detected while preparing job dm_bpm_XCPAutoTaskMgmt for execution: Agent Exec connected to server dmtest: [DM_SESSION_I_SESSION_START]info: "Session 0100c3508000ab92 started for user dmadmin."
 
 
2020-01-02T07:05:15.714803 1600[1600] 0100c3508000ab80 [DM_SESSION_I_ASSUME_USER]info: "Session 0100c3508000ab80 is owned by user dmadmin now."
2020-01-02T07:05:15.726601 1597[1597] 0100c3508000ab7f [DM_SESSION_I_ASSUME_USER]info: "Session 0100c3508000ab7f is owned by user dmadmin now."
bpm_XCPAutoTaskMgmt Report For DocBase dmtest.dmtest As Of 2020/01/02 07:05:15
 
---------------------------------------
 

Then, from a second terminal, we’ll start and stop several idql sessions and observe the resulting output. We will notice the familiar lines *_START and *_QUIT from the session’s logs:

---------------------------------------
2020-01-02T07:09:16.076945 1600[1600] 0100c3508000ab80 [DM_SESSION_I_ASSUME_USER]info: "Session 0100c3508000ab80 is owned by user dmadmin now."
Executing query to collect XCP automatic tasks to be processed:
select r_object_id from dmi_workitem where a_wq_name = 'to be processed by job' and r_auto_method_id != '0000000000000000' and (r_runtime_state = 0 or r_runtime_state = 1) order by r_creation_date asc
Report End 2020/01/02 07:09:16
2020-01-02T07:09:16.584776 7907[7907] 0100c3508000ab97 [DM_SESSION_I_SESSION_QUIT]info: "Session 0100c3508000ab97 quit."
2020-01-02T07:09:44.969770 8080[8080] 0100c3508000ab98 [DM_SESSION_I_SESSION_START]info: "Session 0100c3508000ab98 started for user dmadmin."
2020-01-02T07:09:47.329406 8080[8080] 0100c3508000ab98 [DM_SESSION_I_SESSION_QUIT]info: "Session 0100c3508000ab98 quit."
...

So, inotifywatch is pretty effective as a file watcher.
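Note that psg, used below, is not a standard command; it is presumably a small local helper around ps. A possible definition (an assumption, not necessarily the author’s actual helper) that produces the column layout shown below could be:

# hypothetical psg helper: list the processes whose command line matches a pattern,
# keeping the header line; its own gawk process also matches, hence the "| grep -v gawk" below;
psg() {
   ps -eo ppid,pid,pgid,sid,tname,tpgid,stat,uid,time,args | gawk -v pattern="$1" 'NR == 1 || $0 ~ pattern'
}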

Let’s see how many tail processes are currently running:

$ psg tail | grep -v gawk
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
1 4818 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/agentexec.log
1 4846 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab80
1 4850 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab7f
1 8375 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/job_0800c3508000218b
1 8389 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab99
1 8407 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/sysadmin/Initialize_WQDoc.txt
1 8599 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/job_0800c35080000386
1 8614 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab9a
1 8657 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab9b
1 8673 24411 9328 pts/1 8723 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/sysadmin/DataDictionaryPublisherDoc.txt

And after a while:

$ psg tail | grep -v gawk
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
1 4818 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/agentexec.log
1 4846 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab80
1 4850 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab7f
1 8599 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/job_0800c35080000386
1 8614 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab9a
1 8657 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab9b
1 8673 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/sysadmin/DataDictionaryPublisherDoc.txt
1 8824 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/job_0800c35080007042
1 8834 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab9c
1 8852 24411 9328 pts/1 9132 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/sysadmin/bpm_XCPAutoTaskMgmtDoc.txt

Again:

$ psg tail | grep -v gawk
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
1 10058 24411 9328 pts/1 10252 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/agentexec.log
1 10078 24411 9328 pts/1 10252 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/job_0800c3508000218b
1 10111 24411 9328 pts/1 10252 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab9f
1 10131 24411 9328 pts/1 10252 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab80
1 10135 24411 9328 pts/1 10252 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab7f
1 10139 24411 9328 pts/1 10252 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/sysadmin/Initialize_WQDoc.txt

And again:

$ psg tail | grep -v gawk
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
1 10892 24411 9328 pts/1 11022 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/agentexec.log
1 10896 24411 9328 pts/1 11022 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/job_0800c3508000218b
1 10907 24411 9328 pts/1 11022 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000aba1
1 10921 24411 9328 pts/1 11022 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab80
1 10925 24411 9328 pts/1 11022 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab7f
1 10930 24411 9328 pts/1 11022 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/sysadmin/Initialize_WQDoc.txt

And, after disabling the aggregation of new logs:

$ echo 0 > $tail_on_off
$ psg tail | grep -v gawk
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
1 24676 24277 9328 pts/1 26096 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/agentexec/agentexec.log
1 24710 24277 9328 pts/1 26096 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab80
1 24714 24277 9328 pts/1 26096 S 1001 0:00 tail --follow=name --retry /app/dctm/dba/log/0000c350/dmadmin/0100c3508000ab7f

And eventually:

$ psg tail | grep -v gawk
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
<nothing>

From now on, until the aggregation is turned back on, this list will be empty as no new tail commands will be started.
The number of tail commands grows and shrinks as expected, so the process clean up is working.
The agentexec.log file shows up often because jobs are started frequently and therefore this file regularly triggers the MODIFY event. dmadmin- and sysadmin-owned session logs come and go for each started job. DataDictionaryPublisherDoc.txt, bpm_XCPAutoTaskMgmtDoc.txt and Initialize_WQDoc.txt are a few of the followed jobs’ logs.
In the test system used here, an out-of-the-box docbase without any running applications, the output is obviously very quiet, with long pauses in between, but it can become extremely dense on a busy system. When investigating problems on such systems, it can be useful to redirect the whole output to a text file and search it at leisure with one’s favorite editor. It is also possible to restrict it to as narrow a time window as desired (if docker logs is used) in order to reduce the noise, to exclude files from being watched (see the next paragraph), and even to stop aggregating new logs via the $tail_on_off file so that only the current ones keep being followed as long as they are active (i.e. written to).
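For instance, assuming docker logs is used and the container is named container05bis as in part I, a narrow time window can be captured into a file for off-line searching (container name and timestamps are just examples):

# capture one hour of aggregated output into a file for later inspection;
docker logs --timestamps --since "2020-01-02T07:00:00" --until "2020-01-02T08:00:00" container05bis > /tmp/dmtest_logs.txt
 
# or keep only the last 30 minutes and search it for a given session id;
docker logs --since 30m container05bis | grep 0100c3508000ab80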

Dynamically reconfiguring inotifywait via a parameter file

In the preceding examples, the inotifywait command was passed a fixed, hard-coded subtree name to watch. In certain cases, it could be useful to watch a different, arbitrary sub-directory, or a list of arbitrary files in unrelated sub-directories, with a possible list of excluded files; for even more flexibility, the other inotifywait parameters could also be changed dynamically. In such cases, the running inotifywait command has to be stopped and restarted with the new command-line parameters. One could imagine checking a communication file (like the $tail_on_off file above) inside the entrypoint’s main loop, as shown below. Here, the diff is relative to the preceding static-parameters version:

10a11,13
> # file that contains the watcher's parameters;
> export new_inotify_parameters_file=${watcher_workdir}/inotify_parameters_file
> 
15a19,20
>    inotify_params="$1"
>  
19c24
<    $private_inotify --quiet --monitor --event create,modify,attrib --recursive --timefmt "%Y/%m/%d-%H:%M:%S" --format "%T|%e|%w%f" ${DOCUMENTUM}/dba/log $watcher_workdir | gawk -v tail_timeout=$((tail_timeout * 60)) -v excluded_files="$excluded_files" -v tail_on_off=$tail_on_off -v heartbeat_file=$heartbeat_file -v env_private_tail=private_tail -v FS="|" -v Apo="'" 'BEGIN {
---
>    eval $private_inotify ${inotify_params} $watcher_workdir | gawk -v tail_timeout=$((tail_timeout * 60)) -v excluded_files="$excluded_files" -v bMust_tail=1 -v tail_on_off=$tail_on_off -v heartbeat_file=$heartbeat_file -v env_private_tail=private_tail -v FS="|" -v Apo="'" 'BEGIN {
122a128,142
> process_new_inotify_parameters() {
>    # read in the new parameters from file $new_inotify_parameters_file;
>    # first line of the $new_inotify_parameters_file must contains the non-target parameters, e.g. -m -e create,modify,attrib -r --timefmt "%Y/%m/%d-%H:%M:%S" --format "%T %e %w%f";
>    new_params=$(cat $new_inotify_parameters_file)
>    rm $new_inotify_parameters_file
>  
>    # kill current watcher;
>    pkill -f $private_inotify
> 
>    # kill the current private tail commands too;
>    pkill -f $private_tail
> 
>    # restart inotify with new command-line parameters taken from $new_inotify_parameters_file;
>    follow_logs "$new_params" &
> }
138,139c158,162
< # start the watcher;
< follow_logs &
---
> # default inotifywait parameters;
> # the creation of this file will trigger a restart of the watcher using the new parameters;
> cat - <<-eot > $new_inotify_parameters_file
>    --quiet --monitor --event create,modify,attrib --recursive --timefmt "%Y/%m/%d-%H:%M:%S" --format '%T|%e|%w%f' \${DOCUMENTUM}/dba/log --exclude \$new_inotify_parameters_file
> eot
143a167
>    [[ -f $new_inotify_parameters_file ]] && process_new_inotify_parameters

To apply the patch, save the first listing into a file, e.g. inotify_static_parameters.sh, the above diff output into a file, e.g. static2dynamic.diff, and use the command below to create the script inotify_dynamic_parameters.sh:

patch inotify_static_parameters.sh static2dynamic.diff -o inotify_dynamic_parameters.sh

In order not to trigger events when it is created, the $new_inotify_parameters_file must be excluded from inotifywait’s watched directories.
If, for instance, we later want to exclude jobs’ logs in order to focus on more relevant ones, we could use the following inotifywait parameters:

# cat - <<-eot > $new_inotify_parameters_file
   --quiet --monitor --event create,modify,attrib --recursive --timefmt "%Y/%m/%d-%H:%M:%S" --format "%T|%e|%w%f" \${DOCUMENTUM}/dba/log --exclude '${DOCUMENTUM}/dba/log/dmtest/agentexec/job_.*' --exclude $new_inotify_parameters_file
eot

From the outside of a container:

docker exec <container> /bin/bash -c 'cat - <<-eot > $new_inotify_parameters_file
   --quiet --monitor --event create,modify,attrib --recursive --timefmt "%Y/%m/%d-%H:%M:%S" --format "%T|%e|%w%f" \${DOCUMENTUM}/dba/log --exclude "\${DOCUMENTUM}/dba/log/dmtest/agentexec/job_.*" --exclude \$new_inotify_parameters_file
eot
'

where the new option --exclude specifies a POSIX-compliant regular expression, quoted so the shell does not perform its variable expansions.
After at most $pause_duration seconds, this file is detected (line 34), read (line 14), deleted (line 15) and the current watcher gets restarted in the background with the new parameters (line 24).
This approach is akin to polling the parameter file and it looks a bit unsophisticated. We use events to control the tailing through the $tail_on_off file and for the heartbeat, so why not for the parameter file? The next paragraph will show just that; in the meantime, the patched main loop boils down to the sketch below.
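Reassembled from the original loop and the added check (a sketch using the same variable and function names as in the listings above):

# entrypoint's main loop with the polling of the parameter file added;
while [ true ]; do
   # send heartbeat to inotify;
   touch $heartbeat_file
   # new parameters dropped in ? restart the watcher with them;
   [[ -f $new_inotify_parameters_file ]] && process_new_inotify_parameters
   # do something else here;
   # ...
   sleep $pause_duration
done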

Dynamically reconfiguring inotifywait via an event

A better way would be to use the watcher itself to check whether the parameter file has changed! After all, we shall put our money where our mouth is, right?
In order to do this, a dedicated watcher is set up to watch the parameter file. For technical reasons (in order to watch a file, that file must already exist; also, once the watched file is erased, inotifywait no longer reacts to any events on it and just sits there idly), instead of watching $new_inotify_parameters_file, it watches its parent directory. To avoid scattering files in too many locations, the parameter file will be stored along with the technical files in ${watcher_workdir} and, in order to avoid impacting performance, it will be excluded from the main watcher (parameter --exclude \$new_inotify_parameters_file; do not forget to append it, otherwise the main watcher will raise events for nothing, although this does not prevent the parameter file from being processed).
When a MODIFY event on this file occurs (or a CREATE event if the file did not pre-exist), the event is processed as shown previously.
Here is the diff compared to the preceding polled-parameter-file version:

143a144,151
> # the parameter file's watcher;
> param_watcher() {
>    IFS="|"
>    inotifywait --quiet --monitor --event create,modify,attrib --timefmt "%Y/%m/%d-%H:%M:%S" --format '%T|%e|%w%f' $watcher_workdir --exclude \$tail_on_off \$heartbeat_file | while read timestamp event target_file; do
>       [[ $target_file == $new_inotify_parameters_file ]] && process_new_inotify_parameters
>    done
> }
> 
162a171,173
> 
> # starts the parameter file's watcher;
> param_watcher &

The delta is quite short, with just the function param_watcher() to start a watcher on the parameter file and process its events by invoking the same process_new_inotify_parameters() function introduced in the polling version.
To apply the patch, save the above diff output into a file, e.g. dynamic2event.diff, and use the command below to create the script inotify_dynamic_parameters_event.sh from the polling version’s script inotify_dynamic_parameters.sh created above:

patch inotify_dynamic_parameters.sh dynamic2event.diff -o inotify_dynamic_parameters_event.sh

The dedicated watcher runs in the background and bounces the main watcher whenever new parameters are received. Quite simple.
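Outside the diff syntax, the dedicated watcher is essentially the short function below; as a sketch, the two other technical files are excluded here with a single regular expression, but the idea is the same:

# dedicated watcher on the technical directory; only the parameter file triggers an action;
param_watcher() {
   inotifywait --quiet --monitor --event create,modify,attrib --format '%e|%w%f' \
               --exclude "($tail_on_off|$heartbeat_file)" $watcher_workdir | \
   while IFS="|" read event target_file; do
      [[ $target_file == $new_inotify_parameters_file ]] && process_new_inotify_parameters
   done
}
 
# started in the background alongside the main watcher;
param_watcher &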

Dynamically reconfiguring inotifywait via signals

Yet another way to interact with inotifywait’s script (e.g. the container’s entrypoint) could be through signals. To this effect, we first need to choose suitable signals, say SIGUSR[12], define the trap handlers and implement them. And while we are at it, let’s also add switching the watcher on and off, as illustrated below:

2a3,5
> trap 'bounce_watcher' SIGUSR1
> trap 'toggle_watcher' SIGUSR2
> 
7a11,14
> # new global variable to hold the current tail on/off status;
> # used to toggle the watcher;
> export bMust_tail=1
> 
144,149c151,162
< # the parameter file's watcher;
< param_watcher() {
<    IFS="|"
<    inotifywait --quiet --monitor --event create,modify,attrib --timefmt "%Y/%m/%d-%H:%M:%S" --format '%T|%e|%w%f' $watcher_workdir --exclude \$tail_on_off \$heartbeat_file | while read timestamp event target_file; do
<       [[ $target_file == $new_inotify_parameters_file ]] && process_new_inotify_parameters
<    done
---
> # restart the watcher with new parameters;
> bounce_watcher() {
>    [[ -f $new_inotify_parameters_file ]] && process_new_inotify_parameters
>    echo "parameters reloaded"
> }
> 
> # flip the watcher on/off;
> # as file $tail_on_off is watched, modifying it will trigger the processing of the boolean bMust_tail in the gawk script;
> toggle_watcher() {
>    (( bMust_tail = (bMust_tail + 1) % 2 ))
>    echo $bMust_tail > $tail_on_off
>    echo "toggled the watcher, it is $bMust_tail now"
172,173c185,187
< # starts the parameter file's watcher;
< param_watcher &
---
> process_new_inotify_parameters
> 
> echo "process $$ started"
178d191
<    [[ -f $new_inotify_parameters_file ]] && process_new_inotify_parameters

To apply the patch, save the above diff output into a file, e.g. event2signals.diff, and use the command below to create the script inotify_dynamic_parameters_signals.sh from the event version’s script inotify_dynamic_parameters_event.sh created above:

patch inotify_dynamic_parameters_event.sh event2signals.diff -o inotify_dynamic_parameters_signals.sh
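For readability, here is the essence of the patch outside the diff syntax, i.e. the signal wiring and the two handlers (a condensed sketch reusing the names above; bMust_tail is the exported global variable introduced by the patch):

# handlers for the outside signals;
trap 'bounce_watcher' SIGUSR1
trap 'toggle_watcher' SIGUSR2
 
# restart the watcher with new parameters;
bounce_watcher() {
   [[ -f $new_inotify_parameters_file ]] && process_new_inotify_parameters
   echo "parameters reloaded"
}
 
# flip the watcher on/off by writing into the watched $tail_on_off file;
toggle_watcher() {
   (( bMust_tail = (bMust_tail + 1) % 2 ))
   echo $bMust_tail > $tail_on_off
   echo "toggled the watcher, it is $bMust_tail now"
}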

The loop is simpler now as the functions are directly invoked by outside signals.
To use it, just send the SIGUSR[12] signals to the container, as shown below:

# write a new configuration file:
...
# and restart the watcher:
$ docker kill --signal SIGUSR1 <container>
 
# toggle the watcher;
$ docker kill --signal SIGUSR2 <container>

If testing outside a container, the commands would be:

/bin/kill --signal SIGUSR1 pid
# or:
/bin/kill --signal SIGUSR2 pid

where pid is the pid displayed when the script starts.
We won’t dwell more on signals here; see How to stop Documentum processes in a docker container, and more (part I) for many more examples of using signals to talk to containers.
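As an aside, if the pid was not noted when the script started, it can be retrieved from the process list, e.g. (assuming the script file name used above):

pid=$(pgrep -f inotify_dynamic_parameters_signals.sh)
/bin/kill --signal SIGUSR2 $pid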

Conclusion

inotifywait is simple to configure and extremely fast, practically instantaneous, at sending notifications. Although the aggregated output looks a bit confusing when the system is loaded, there are several ways to reduce its volume and make it easier to use. It is an interesting addition to an administrator’s toolbox to help troubleshoot repositories and their client applications. With suitable parameters, it can be used in any containerized or traditional installation. It is one of those no-frills tools that just work (mostly) as expected. If you ever happen to need a file watcher, consider it; you won’t be disappointed, and it is fun to use.

Cet article Tracking Logs Inside a Documentum Container (part II) est apparu en premier sur Blog dbi services.

Tracking Logs Inside a Documentum Container (part I)

Yann Neuhaus - Wed, 2020-01-29 18:44

Containers running under docker can have their stdout observed from the outside through the “docker logs” command; here is an excerpt of its usage:

docker logs --help
Usage:	docker logs [OPTIONS] CONTAINER

Fetch the logs of a container

Options:
      --details        Show extra details provided to logs
  -f, --follow         Follow log output
      --since string   Show logs since timestamp (e.g. 2013-01-02T13:23:37) or relative (e.g. 42m
                       for 42 minutes)
      --tail string    Number of lines to show from the end of the logs (default "all")
  -t, --timestamps     Show timestamps
      --until string   Show logs before a timestamp (e.g. 2013-01-02T13:23:37) or relative (e.g.
                       42m for 42 minutes)

Example of output:

docker logs --follow --timestamps container05bis
...
2019-07-10T03:50:38.624862914Z ==> /app/dctm/dba/log/docbroker.container05bis.1489.log <==
2019-07-10T03:50:38.624888183Z OpenText Documentum Connection Broker (version 16.4.0000.0248 Linux64)
2019-07-10T03:50:38.624893936Z Copyright (c) 2018. OpenText Corporation
2019-07-10T03:50:38.624898034Z 2019-07-10T05:50:38.519721 [DM_DOCBROKER_I_START]info: "Docbroker has started. Process id: 35"
2019-07-10T03:50:38.624902047Z 2019-07-10T05:50:38.521502 [DM_DOCBROKER_I_REGISTERED_PORT]info: "The Docbroker registered using port (1489)."
2019-07-10T03:50:38.624906087Z 2019-07-10T05:50:38.521544 [DM_DOCBROKER_I_LISTENING]info: "The Docbroker is listening on network address: (INET_ADDR: family: 2, port: 1489, host: container05bis (192.168.33.7, 0721a8c0))"
2019-07-10T03:50:38.624911984Z
2019-07-10T03:50:38.624915369Z ==> /app/dctm/dba/log/dmtest05bis.log <==
2019-07-10T03:50:38.625040316Z
2019-07-10T03:50:38.625050474Z ==> /app/dctm/dba/log/dmtest05bis/agentexec/agentexec.log <==
2019-07-10T03:50:38.625055299Z Wed Jul 10 03:33:48 2019 [INFORMATION] [LAUNCHER 4251] Detected during program initialization: Version: 16.4.0000.0248 Linux64
2019-07-10T03:50:38.625059065Z Wed Jul 10 03:34:18 2019 [INFORMATION] [LAUNCHER 4442] Detected during program initialization: Version: 16.4.0000.0248 Linux64
2019-07-10T03:50:38.625071866Z Wed Jul 10 03:34:48 2019 [INFORMATION] [LAUNCHER 4504] Detected during program initialization: Version: 16.4.0000.0248 Linux64
2019-07-10T03:50:38.625075268Z Wed Jul 10 03:36:18 2019 [INFORMATION] [LAUNCHER 4891] Detected during program initialization: Version: 16.4.0000.0248 Linux64
2019-07-10T03:50:38.625078688Z Wed Jul 10 03:36:49 2019 [INFORMATION] [LAUNCHER 4971] Detected during program initialization: Version: 16.4.0000.0248 Linux64
2019-07-10T03:50:38.625082182Z Wed Jul 10 03:48:18 2019 [INFORMATION] [LAUNCHER 6916] Detected during program initialization: Version: 16.4.0000.0248 Linux64
2019-07-10T03:50:38.625096886Z
2019-07-10T03:50:38.625101275Z ==> /app/dctm/wildfly9.0.1/server/DctmServer_MethodServer/logs/ServerApps.log <==
2019-07-10T03:50:38.625105098Z at io.undertow.servlet.core.DeploymentManagerImpl.start(DeploymentManagerImpl.java:511)
2019-07-10T03:50:38.625108575Z at org.wildfly.extension.undertow.deployment.UndertowDeploymentService.startContext(UndertowDeploymentService.java:101)
2019-07-10T03:50:38.625112342Z at org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:82)
2019-07-10T03:50:38.625116110Z at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
2019-07-10T03:50:38.625120084Z at java.util.concurrent.FutureTask.run(FutureTask.java:266)
2019-07-10T03:50:38.625123672Z at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
2019-07-10T03:50:38.625127341Z at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2019-07-10T03:50:38.625131122Z at java.lang.Thread.run(Thread.java:748)
2019-07-10T03:50:38.625134828Z at org.jboss.threads.JBossThread.run(JBossThread.java:320)
2019-07-10T03:50:38.625139133Z 05:34:58,050 INFO [ServerService Thread Pool -- 96] com.documentum.cs.otds.DfSessionHandler - DFC Client Successfully initialized
2019-07-10T03:50:38.625143291Z
2019-07-10T03:50:38.625146939Z ==> /app/dctm/wildfly9.0.1/server/DctmServer_MethodServer/logs/ServerApps_trace.log <==
2019-07-10T03:50:38.625150991Z
2019-07-10T03:50:38.625154528Z ==> /app/dctm/wildfly9.0.1/server/DctmServer_MethodServer/logs/dmotdsrest.log <==
2019-07-10T03:50:38.625159563Z 03:33:16,505 [ServerService Thread Pool -- 66] DFC Client Successfully initialized
2019-07-10T03:50:38.625163445Z 05:34:58,050 [ServerService Thread Pool -- 96] DFC Client Successfully initialized
2019-07-10T03:50:38.625955045Z Setting up watches. Beware: since -r was given, this may take a while!
2019-07-10T03:50:38.627044196Z Watches established.
2019-07-10T03:50:38.934648542Z
2019-07-10T03:50:38.934668467Z ==> /app/dctm/dba/log/dmtest05bis.log <==
2019-07-10T03:50:38.934673076Z Wed Jul 10 05:50:38 2019[DM_STARTUP_W_DOCBASE_OWNER_NOT_FOUND] *** warning *** : The database user, dmtest05bisc as specified by your server.ini is not a valid user as determined using the system password check api. This will likely severly impair the operation of your docbase.
2019-07-10T03:50:39.001793811Z
2019-07-10T03:50:39.001816266Z
2019-07-10T03:50:39.001821414Z OpenText Documentum Content Server (version 16.4.0000.0248 Linux64.Oracle)
2019-07-10T03:50:39.001825146Z Copyright (c) 2018. OpenText Corporation
2019-07-10T03:50:39.001828983Z All rights reserved.
2019-07-10T03:50:39.001832816Z
2019-07-10T03:50:39.005318448Z 2019-07-10T05:50:39.005068 193[193] 0000000000000000 [DM_SERVER_I_START_SERVER]info: "Docbase dmtest05bis attempting to open"
2019-07-10T03:50:39.005337158Z
2019-07-10T03:50:39.005342472Z 2019-07-10T05:50:39.005172 193[193] 0000000000000000 [DM_SERVER_I_START_KEY_STORAGE_MODE]info: "Docbase dmtest05bis is using database for cryptographic key storage"
...

If the information is plethoric, the above command can be narrowed to a time window by adding --since and --until restrictions, e.g.:

docker logs --timestamps --follow --since 5m <container>

Thus, everything sent to the container’s stdout can be aggregated and viewed very simply from one place, yielding a cheap console log. In particular, Documentum containers could expose their well-known log files to the outside world, e.g. the docbroker log, the content server(s) log(s) and the method server logs. To this effect, it would be enough to just start a “tail -F ” on those files from within the entrypoint as illustrated below:

tail -F ${DOCUMENTUM}/dba/log/docbroker.log ${DOCUMENTUM}/dba/log/dmtest.log ${DOCUMENTUM}/wildfly9.0.1/server/DctmServer_MethodServer/logs/ServerApps.log ...

The -F option guarantees that the logs continue being followed even after a possible rotation.
Admittedly, this output can be a bit hard to read because the logs are interlaced, i.e. lines or blocks of lines from different logs are displayed sorted by the time they were produced rather than by their origin. Actually, this is a benefit because it makes it easier to find correlations between distinct, apparently unrelated events.
Viewing a particular log is still possible from outside the container, e.g.:

docker exec <container_name> /bin/bash -c "tail -f \${DOCUMENTUM}/dba/log/dmtest05bis.log"

provided the tail command exists in the container, which is not a given as there is a definite will to make images as stripped down as possible.
As those files are statically known (i.e. at build time), such a command could be defined as early as in the buildfile and invoked in its entrypoint script.
Unfortunately, the content server logs are not very verbose and the most useful messages are directed to session or ad hoc logs. The session logs are dynamically created for each new session with the session id as their name, which makes them unpredictable. Since those names are only known at run-time, the above buildfile’s “tail -F” command cannot include them and consequently they are not displayed by the “docker logs” command. The same applies to on-demand trace files with variable names, e.g. with a timestamp suffix.
So, is there a way to follow those dynamic session logs (or any dynamically named files, for that matter)? An obvious way is to use a file or directory watcher to be notified of any created or modified file. If a watcher process running inside the container could wait for such conditions and signal any occurrence thereof, a listener process, also running inside the container, could receive the notification and dynamically fork a tail -F command to follow that file. Externally, “docker logs” would continue displaying whatever is sent to the container’s stdout, including the newly discovered files.
Under Linux, we can use inotifywait as the file watcher. Let’s see how to set it up.

Installing inotifywait

Under a Debian Linux derivative such as Ubuntu, inotifywait can be easily installed from the inotify-tools package through the usual command:

sudo apt-get install inotify-tools

Under a Red Hat Linux derivative such as Centos, a two-step method is to first grab the rpm package from its on-line repository and then install it; the latest release as of this writing is the 3.14-9:

  curl https://download-ib01.fedoraproject.org/pub/epel/7/x86_64/Packages/i/inotify-tools-3.14-9.el7.x86_64.rpm -o inotify-tools-3.14-9.el7.x86_64.rpm
  sudo rpm -Uvh inotify-tools-3.14-9.el7.x86_64.rpm 

In order to verify that the installation was successful, just try to launch “inotifywait”:

$ inotifywait
No files specified to watch!

Good, the command exists; let’s now test it.

Testing inotifywait

The command inotifywait has the following invocation syntax:

inotifywait [-hcmrq] [-e <event>] [-t <seconds>] [--format <fmt>] [--timefmt <fmt>] <file> [ ... ]

The man page explains very well each of the parameters so, please, refer there for details.
As the whole point is to detect new or changed files whose name we don’t know at image build time, but whose location is known (e.g. a path such as ${DOCUMENTUM}/dba/log), we will be watching directories. In such a case, one parameter will be a list of directories to recursively watch for any new or modified files.
That command was designed to output to stdout any event whose occurrence it was configured to wait for.
The default output format is the following:

watched_filename EVENT_NAMES event_filename

one line per file.
EVENT_NAMES is the event that occurred, in our case mainly one of CREATE or MODIFY as requested through the -e command-line parameter.
As we watch directories, watched_filename is the name of the watched directory where the event occurred and event_filename, the created/modified file.
Here is an example of use as a test with custom formatting for a better presentation:

$ inotifywait --quiet --monitor --event create,modify,attrib --recursive --timefmt "%Y/%m/%d-%H:%M:%S" --format "%T %e %w%f" /tmp &

The command will recursively (option --recursive) watch the /tmp directory for any created or modified (option --event …) file, or for a change of file attributes such as the timestamp or permissions. It will keep running instead of exiting after the first event (option --monitor), with the trailing & putting it in the background, and will issue events to stdout prefixed with a properly formatted timestamp (options --timefmt and --format).
With inotifywait running in the background, create now a dummy file in /tmp:

$ touch /tmp/xx
# the following lines get displayed:
2019/12/31-17:43:45 CREATE /tmp/xx
2019/12/31-17:43:45 ATTRIB /tmp/xx

Actually, the /tmp directory is a busy directory and a lot of file activity occurs in there very quickly:

2019/12/31-17:43:52 CREATE /tmp/hsperfdata_dmadmin/471
2019/12/31-17:43:52 MODIFY /tmp/hsperfdata_dmadmin/471
2019/12/31-17:43:52 MODIFY /tmp/hsperfdata_dmadmin/471
2019/12/31-17:44:06 MODIFY /tmp/nohup.log
2019/12/31-17:44:06 MODIFY /tmp/nohup.log
2019/12/31-17:44:06 MODIFY /tmp/nohup.log
2019/12/31-17:45:06 MODIFY /tmp/nohup.log
2019/12/31-17:45:06 MODIFY /tmp/nohup.log
2019/12/31-17:45:06 MODIFY /tmp/nohup.log
2019/12/31-17:45:22 CREATE /tmp/hsperfdata_dmadmin/510
2019/12/31-17:45:22 MODIFY /tmp/hsperfdata_dmadmin/510
2019/12/31-17:45:22 MODIFY /tmp/hsperfdata_dmadmin/510
2019/12/31-17:45:50 CREATE /tmp/runc-process785366568
2019/12/31-17:45:50 MODIFY /tmp/runc-process785366568
...

Let’s see if the directory is really watched recursively:

$ mkdir -p /tmp/dir1/dir2
> 2019/12/31-17:49:51 CREATE,ISDIR /tmp/dir1
> 2019/12/31-17:49:51 CREATE,ISDIR /tmp/dir1/dir2

We notice that the directory creation is trapped too.

$ touch /tmp/dir1/dir2/xx
> 2019/12/31-17:49:59 CREATE /tmp/dir1/dir2/xx
> 2019/12/31-17:49:59 ATTRIB /tmp/dir1/dir2/xx

It works as advertised. Moreover, two events are raised here, CREATE for the file creation as it didn’t previously exist, and ATTRIB for the change of timestamp. Let’s verify this:

$ touch /tmp/dir1/dir2/xx
> 2019/12/31-17:50:01 ATTRIB /tmp/dir1/dir2/xx
$ touch /tmp/dir1/dir2/xx
> 2019/12/31-17:50:03 ATTRIB /tmp/dir1/dir2/xx

Indeed.
Let’s see if a change of attribute is also noticed:

$ chmod -r /tmp/dir1/dir2/xx
2019/12/31-18:03:50 ATTRIB /tmp/dir1/dir2/xx
$ chmod +r /tmp/dir1/dir2/xx
2019/12/31-18:03:55 ATTRIB /tmp/dir1/dir2/xx

It is, fine.

Using inotify in a [containerized] Documentum installation

inotifywait will be used with the syntax shown above against the Documentum log directory ${DOCUMENTUM}/dba/log. Its output will be piped into a gawk script that will spawn “tail -F” commands when needed and keep a list of such processes so they can be killed after a timeout of inactivity, i.e. when they are following a file that did not get updated within a given time interval (let’s jokingly name this value “tail time to live”, or TTTL). This is to prevent a potentially unlimited number of such processes from hogging the system’s resources; not that they consume many CPU cycles, but it is pointless to leave hundreds of such idle processes sitting in memory. So the script will start a tail command on a file upon its CREATE event, or upon a MODIFY event if the file is not already followed, and clean up existing idle tail commands. As said before, the code below could be included in a container’s entrypoint to make it run constantly in the background.

#!/bin/bash

export watcher_workdir=/tmp/watcher
export tail_on_off=${watcher_workdir}/tail_on_off
export heartbeat_file=${watcher_workdir}/heartbeat_file
export pause_duration=3

# comma-separated list of files watched but excluded from tailing;
export excluded_files="$tail_on_off,$heartbeat_file"

# make private copies of inotifywait and tail so they can be easily identified and killed from the list of running processes;
export private_inotify=~/dctm-inotifywait
export private_tail=~/dctm-tail

follow_logs() {
   # tail time to live, maximum duration in minutes an unmodified tailed file will stay tailed before the tail is killed, i.e. TTTL ;-);
   tail_timeout=1
 
   $private_inotify --quiet --monitor --event create,modify,attrib --recursive --timefmt "%Y/%m/%d-%H:%M:%S" --format "%T|%e|%w%f" ${DOCUMENTUM}/dba/log $watcher_workdir | gawk -v tail_timeout=$((tail_timeout * 60)) -v excluded_files="$excluded_files" -v tail_on_off=$tail_on_off -v heartbeat_file=$heartbeat_file -v env_private_tail=private_tail -v FS="|" -v Apo="'" 'BEGIN {
      # get the dedicated tail command from the environment;
      private_tail=ENVIRON[env_private_tail]

      # get current time;
      cmd_now = "date +\"%Y/%m/%d-%H:%M:%S\""
 
      # get the time of the next check, i.e. now + the timeout delay;
      cmd_future = "date +\"%Y/%m/%d-%H:%M:%S\" --date=\"" tail_timeout  " seconds\""
      cmd_future | getline next_check_date; close(cmd_future)
 
      # find the running private tail commands;
      # use the same FS defined in the outer gawk scope to guarantee that $0 can be split smoothly;
      cmd_tail = "pgrep -fa " private_tail " | gawk " Apo "{print $1 \"" FS "\" $NF}" Apo
 
      # get the most recent time a running idle tail command is allowed to live;
      cmd_oldest = "date +\"%Y/%m/%d-%H:%M:%S\" --date=\"" tail_timeout  " seconds ago\""
 
      # command to get the modifiy date of a tailed file;
      cmd_file_stamp = "date +\"%Y/%m/%d-%H:%M:%S\" --reference="
 
      # files can be excluded from tailing if specified in parameter exclude_files;
      # convert the comma-separated list of excluded files to an associative array, which is easier to search as it is indexed by file names;
      nb_files = split(excluded_files, tmp_excluded_files, ",")
      for (;nb_files > 0; nb_files--)
         tab_excluded_files[tmp_excluded_files[nb_files]] = 0

      bMust_tail = 1
   }
   {
      # skip directories (they have a trailing /);
      if (match($3, /\/$/)) next
 
      # check if logs must be tailed;
      if (tail_on_off == $3 && ("CREATE" == $2 || "MODIFY" == $2)) {
         if ((getline bMust_tail < tail_on_off) >  0) {
            close(tail_on_off)
            system("rm " tail_on_off " 2>/dev/null")
         }
      } else 
         # tailing on ?
         if (bMust_tail && !($3 in current_tails))
            # CREATE event ?
            if ("CREATE" == $2 && !($3 in tab_excluded_files)) {
               # newly created file not in exclusion list: tail it !
               system(private_tail " --follow=name --retry " $3 " 2>/dev/null &")
               # ... and keep track of it;
               current_tails[$3] = 0
            }
            # MODIFY event ?
            else if ("MODIFY" == $2 && !($3 in tab_excluded_files)) {
               # modified file not in exclusion list nor already tailed: tail it !
               system(private_tail " --follow=name --retry " $3 " 2>/dev/null &")
               # ... and keep track of it;
               current_tails[$3] = 0
            }
 
      # clean up logic starts here;
      # only upon a heartbeat event;
      if ("ATTRIB" == $2 && $3 == heartbeat_file) {
         # time to check ?
         cmd_now | getline now; close(cmd_now)
         if (now >= next_check_date) {
            # maximum time to live for idle tails;
            cmd_oldest | getline oldest_date; close(cmd_oldest)
            # loop though all the running tail commands;
            while ((cmd_tail | getline) > 0) {
               # cannot explain this spurious entry; ignore it explicitly even though it would be ignored anyway;
               if ("$NF}" Apo == $2) continue
               tail_pid = $1
               tailed_file = $2
               # is it one of the watched files ?
               if (tailed_file in current_tails) {
                  # get the current tailed file last modification date;
                  (cmd_file_stamp tailed_file " 2>/dev/null") | getline last_modif_date; close(cmd_file_stamp tailed_file " 2>/dev/null")
                  # tailed file not updated during time to live period ?
                  if (last_modif_date <= oldest_date) {
                     # yes, kill the tailing process;
                     system("kill -9 " tail_pid " 2> /dev/null")
                     # ... and update the list of tailed files;
                     delete current_tails[tailed_file]
                  }
                  else current_tails[tailed_file] = 1
               }
               # else it should not happen because private tail commands are only issued from here and get tracked;
            }
            close(cmd_tail)

            # resynchronize internal list with actual tailed files;
            for (f in current_tails)
               if (0 == current_tails[f])
                  # tracked file not tailed any more (this should not happen because private tail commands are killed from here only), untrack it;
                  delete current_tails[f]
               else
                  # tracked file was checked and is still alive;
                  current_tails[f] = 0
 
            # compute the next check time;
            cmd_future | getline next_check_date; close(cmd_future)
         }
      }
   }'
}
 
 
# -----------
# main;
# -----------

# create an inotifywait alias so it is easily identified to be stopped later;
ln -s /usr/bin/inotifywait $private_inotify 2> /dev/null
ln -s /usr/bin/tail $private_tail           2> /dev/null

# create the watcher's workdir;
mkdir -p $watcher_workdir 2> /dev/null

# enable following new or modified logs;
echo 1 > $tail_on_off
 
# start the watcher;
follow_logs &

while [ true ]; do
   # send heartbeat to inotify;
   touch $heartbeat_file
   # do something else here;
   # ...
   sleep $pause_duration
done

Admittedly, the fact that gawk relies a lot on external commands and pipes (because of its lack of libraries, but this can be arranged by extending it, see other articles in this blog such as this one) obfuscates the statements’ purpose somewhat. Also, all the logic is contained in the not-so-obvious automatic loop which is executed each time an event is notified. Anyway, as usual, the code is only provided as an illustration.
On line 3, a “technical” sub-directory is defined and created later on line 134. It will contain the heartbeat file (file heartbeat_file, see next paragraph for more details) and a file to switch tailing on and off (file tail_on_off). This directory is watched (see the call to inotifywait on line 19).
On lines 12 and 13, private versions of the commands inotifywait and tail are defined and created later on lines 130 and 131 as symlinks; this makes them easy to identify in the list of running processes so they can be killed if needed.
This script is encapsulated in the function follow_logs() starting on line 15. It is launched in the background on line 140.
On line 46 the boolean bMust_tail is initialized; it gets updated on line 53 upon a CREATE event on the file $tail_on_off; after it has been read into bMust_tail, the file is removed. By writing 0 or a positive number into it, aggregation of new logs is respectively disabled or enabled:

# disable following new logs;
# from within a container or in a traditional installation:
$ echo 0 > $tail_on_off
 
# or, for a container, supposing $tail_on_off is defined in the container's current user's environment:
$ docker exec container_name /bin/bash -c "echo 0 > \$tail_on_off"
 
# enable following new logs;
# from within a container or in a traditional installation:
$ echo 1 > $tail_on_off
 
# or, for a container, supposing $tail_on_off is defined in the container's current user's environment:
$ docker exec container_name /bin/bash -c "echo 1 > \$tail_on_off"

Currently running tail commands are not impacted.
Note how the $ in $tail_on_off is escaped so it is not consumed by the docker’s host shell and is passed as-is to the container.
On line 63 and 70, private tails commands are started when new files have appeared in the watched sub-directory, or modified files which are not already tailed.
One line 79, a clean-up of idle (private) tail commands is performed and the internal associative array that keeps track of them is refreshed to make sure it reflects the currently running tails.
One line 142, the well-known container entrypoint’s never-ending loop is entered.
The above script can also be run stand-alone in a classic, non containerized installation.

Keeping the gawk script busy

A problem to solve is that the gawk script runs as a coroutine of inotifywait, i.e. it is synchronized with its output. If there is none, because no events were detected, the script blocks waiting for input, so no cleaning of idle tail commands gets done. This is not such a big deal because no new tail commands are started either, so the status quo is maintained. However, an easy work-around is possible: in order to force the script to enter the next iteration and give it a chance to perform the cleaning, we introduce a heartbeat with one watched file, e.g.:

...
while [ true ]; do
   # send heartbeat to inotify;
   touch $heartbeat_file
   # do something else here;
   # ...
   sleep $pause_duration
done


This explains the presence of the file $heartbeat_file in the list of inotifywait’s targets (see line 19 in the code above).
Now, because its timestamp is updated, at least one event is always raised periodically, an ATTRIB on the heartbeat file $heartbeat_file, as shown before. Although ATTRIB events are listened to, no action is generally taken upon them, except when they occur on $heartbeat_file, in which case their sole purpose is to trigger the script’s execution, more precisely the cleanup of inactive tail commands.

Let’s test it now.

The article Tracking Logs Inside a Documentum Container (part I) appeared first on Blog dbi services.

SSQ Unnesting

Jonathan Lewis - Wed, 2020-01-29 06:17

I hesitate to call something a bug simply because Oracle doesn’t do what I thought it would do; but when a trace file says:

“I’m not going to do X because P is not true”

followed a little later by

“I’m going to do Y because P is true”

then I think it’s safe to say there’s a bug there somewhere – even if it’s only a bug in the code that writes the trace file.

The example in this note is a feature that appeared in 12c (possibly only 12.2) – the ability to unnest scalar subqueries in the select list and transform them into outer joins. Here’s an example to demonstrate the mechanism:

rem
rem     Script:         ssq_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem

create table companies as
select
        rownum  id,
        dbms_random.string('U',30)      name
from
        all_objects
where
        rownum <= 1e4 -- > comment to avoid wordpress format issue
;

alter table companies add constraint com_pk primary key(id);

create table orders (
        id              number(10,0),
        id_company      number(10,0)    not null,
        date_placed     date,
        status          varchar2(1),
        items           number(3,0),
        valuation       number(6,2),
        constraint ord_fk_com foreign key(id_company) references companies,
        constraint ord_pk primary key(id)
)
/

insert into orders
select
        rownum,
        trunc(dbms_random.value(1,1e4))  id_company,
        sysdate - 100 + rownum/100        date_placed,
        chr(64+dbms_random.value(1,6))    status,
        trunc(dbms_random.value(1,11))    items,
        round(dbms_random.value(1,250),2) valuation
from
        all_objects
where
        rownum <= 1e4 -- > comment to avoid wordpress format issue
;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'orders',
                method_opt  => 'for all columns size 1'
        );
end;
/

set serveroutput off
set linesize 180
set pagesize 60
set trimspool on

select
        /*+ 
                qb_name(main)
        */
        ord.id,
        (
        select 
                /*+ qb_name(company) */ 
                max(com.name)
        from    companies  com  
        where   com.id  = ord.id_company
        )   company,
        ord.valuation,
        ord.status
from
        orders          ord
where
        ord.date_placed > trunc(sysdate) - 1
/

select * from table(dbms_xplan.display_cursor(null,null,'alias'))
/

I’ve created an orders table with an id_company column that is declared as a foreign key to a companies table. When I’ve queried the orders table and reported the company associated with an order I’ve been a little silly and used a correlated scalar subquery in the select list to query the companies table instead of doing a simple join. In fact I’ve been more than a little silly because I’ve used an aggregate when the query is by primary key and can only return one row.

Here’s the execution plan (produced by 12.2 or 19.3)


--------------------------------------------------------------------------------
| Id  | Operation          | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |           |       |       |    33 (100)|          |
|*  1 |  HASH JOIN OUTER   |           |   143 |  8294 |    33  (22)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| ORDERS    |   143 |  3289 |    22  (23)| 00:00:01 |
|   3 |   TABLE ACCESS FULL| COMPANIES | 10000 |   341K|    10  (10)| 00:00:01 |
--------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$EBD4C958
   2 - SEL$EBD4C958 / ORD@MAIN
   3 - SEL$EBD4C958 / COM@COMPANY

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("COM"."ID"="ORD"."ID_COMPANY")
   2 - filter("ORD"."DATE_PLACED">TRUNC(SYSDATE@!)-1)

Note
-----
   - this is an adaptive plan

The optimizer has taken my query and turned it into a simple (outer) join between the two tables. I’ve included the Query Block / Alias information in the output so that you can see that Oracle really has generated a new query block by transforming the two separate query blocks in the original query.
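For comparison, a hand-written equivalent of the transformed query would look something like the following sketch (for illustration only, it is not part of the original test script):

select
        ord.id,
        com.name        company,
        ord.valuation,
        ord.status
from
        orders          ord
left outer join
        companies       com
on
        com.id = ord.id_company
where
        ord.date_placed > trunc(sysdate) - 1
/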

Oracle has been very clever here – it has even recognised that the join is going to use a unique scan of a unique key so it has taken out the redundant aggregation step. In many cases where this type of scalar subquery unnesting is used you’re more likely to see a plan with one of the following shapes:


----------------------------------------------------------------------------------
| Id  | Operation            | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |           |       |       |    36 (100)|          |
|*  1 |  HASH JOIN OUTER     |           |   144 |   286K|    36  (28)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL  | ORDERS    |   144 |  3312 |    22  (23)| 00:00:01 |
|   3 |   VIEW               | VW_SSQ_1  | 10000 |    19M|    13  (31)| 00:00:01 |
|   4 |    HASH GROUP BY     |           | 10000 |   341K|    13  (31)| 00:00:01 |
|   5 |     TABLE ACCESS FULL| COMPANIES | 10000 |   341K|    10  (10)| 00:00:01 |
----------------------------------------------------------------------------------

-----------------------------------------------------------------------------------
| Id  | Operation             | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |           |       |       |    36 (100)|          |
|*  1 |  HASH JOIN OUTER      |           |   144 |   286K|    36  (28)| 00:00:01 |
|   2 |   JOIN FILTER CREATE  | :BF0000   |   144 |  3312 |    22  (23)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL  | ORDERS    |   144 |  3312 |    22  (23)| 00:00:01 |
|   4 |   VIEW                | VW_SSQ_1  | 10000 |    19M|    13  (31)| 00:00:01 |
|   5 |    HASH GROUP BY      |           | 10000 |   341K|    13  (31)| 00:00:01 |
|   6 |     JOIN FILTER USE   | :BF0000   | 10000 |   341K|    10  (10)| 00:00:01 |
|*  7 |      TABLE ACCESS FULL| COMPANIES | 10000 |   341K|    10  (10)| 00:00:01 |
-----------------------------------------------------------------------------------

---------------------------------------------------------------------------------
| Id  | Operation           | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |           |       |       |    34 (100)|          |
|   1 |  HASH GROUP BY      |           |   144 |  8784 |    34  (24)| 00:00:01 |
|*  2 |   HASH JOIN OUTER   |           |   144 |  8784 |    33  (22)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| ORDERS    |   144 |  3744 |    22  (23)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| COMPANIES | 10000 |   341K|    10  (10)| 00:00:01 |
---------------------------------------------------------------------------------

The first variation shows the creation of an aggregate view that is used in the join – note the generated view name vw_ssq_1 (ssq = scalar sub query). In the second variation Oracle has used a Bloom filter to reduce the volume of data passed up from the view to the hash join operator, and in the third variation Oracle has used complex view merging to handle the join before performing the aggregation.
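Logically, the first of those plan shapes corresponds to a rewrite along the following lines (a sketch only; the view and column names ssq, com_id and max_name are invented for readability, the optimizer’s generated names, such as vw_ssq_1, differ):

select
        ord.id,
        ssq.max_name    company,
        ord.valuation,
        ord.status
from
        orders          ord
left outer join
        (
        select
                com.id          com_id,
                max(com.name)   max_name
        from    companies       com
        group by
                com.id
        )               ssq
on
        ssq.com_id = ord.id_company
where
        ord.date_placed > trunc(sysdate) - 1
/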

The anomaly

Clearly the optimizer is smarter than I am with this query – it spotted that I didn’t need that max() aggregation and took it out. So maybe I should take a hint from the optimizer and edit the query to remove the max().
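The edited statement is simply the original query with the max() stripped from the scalar subquery (a sketch, reconstructed from that description):

select
        /*+
                qb_name(main)
        */
        ord.id,
        (
        select
                /*+ qb_name(company) */
                com.name
        from    companies  com
        where   com.id  = ord.id_company
        )   company,
        ord.valuation,
        ord.status
from
        orders          ord
where
        ord.date_placed > trunc(sysdate) - 1
/

Here’s the plan I get: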


----------------------------------------------------------------------------------------
| Id  | Operation                   | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |           |       |       |   167 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| COMPANIES |     1 |    35 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN         | COM_PK    |     1 |       |     1   (0)| 00:00:01 |
|*  3 |  TABLE ACCESS FULL          | ORDERS    |   145 |  3335 |    22  (23)| 00:00:01 |
-----------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - COMPANY / COM@COMPANY
   2 - COMPANY / COM@COMPANY
   3 - MAIN    / ORD@MAIN

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("COM"."ID"=:B1)
   3 - filter("ORD"."DATE_PLACED">TRUNC(SYSDATE@!)-1)

The optimizer doesn’t unnest the subquery (and it produces an execution plan that has a far higher cost than the unnested version). You can see from the Query Block information that the final query still consists of the two original query blocks, and the plan shows the standard “subquery in the select list” pattern – the main query block at the end of the plan with the scalar subquery(ies) above it. Even if I insert an /*+ unnest */ hint in the subquery the optimizer will not unnest the scalar subquery.

This really looks like a bug – which means I have to take a look at the CBO (10053) trace file; and here are the critical lines (from the 19.3 trace file which is significantly more informative than the 12.2 file):

...
SU: Checking validity of scalar subquery unnesting for query block COMPANY (#0)
SU: bypassed: Scalar subquery may return more than one row.
...

When we used an aggregate subquery the optimizer knew the aggregation was redundant because it was querying with equality on the primary key, so it eliminated the aggregation step from the plan; but when we don’t specify an aggregate the optimizer thinks the primary key will return more than one row!

So do we have a “documentation” bug where the trace file is simply reporting the wrong reason for bypassing unnesting, or do we have a logic bug where the optimizer makes a mistake when checking for uniqueness? (I’d like it to be the latter, and see a fix some time soon, of course.)

Conclusion

There are cases where the optimizer ought to be unnesting scalar subqueries in the select list but fails to do so for what seems to be a completely spurious reason. Unfortunately some people find it very convenient to write SQL that does primary key lookups as in-line scalar subqueries instead of joins; so if you find examples like this then (for the short term, at least) you might see some performance benefit by introducing a redundant max() operation in the scalar subquery.

Footnote

If you want to see more examples of how Oracle does, or doesn’t, handle scalar subqueries in the select list there are a couple of articles on Nenad Noveljic’s blog: one comparing how Oracle and SQL Server handle a particular case, the other looking at several other cases.

 

Command Line Interface (CLI) In Oracle Cloud Infrastructure (OCI)

Online Apps DBA - Wed, 2020-01-29 05:06

Command Line Interface (CLI) In Oracle Cloud Infrastructure (OCI) CLI is an automation tool that lets you work with almost all of the services available in Oracle Cloud Infrastructure (OCI). To know more, check out K21Academy’s post at https://k21academy.com/oci54 that covers: • Overview of CLI • Advantages of using CLI Over Cloud Console • Steps […]

The post Command Line Interface (CLI) In Oracle Cloud Infrastructure (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Why no GUI installations anymore?

Tim Hall - Wed, 2020-01-29 03:50

I had the following comment on a RAC article yesterday.

“this way of training is not good. please train as previous method.”

I believe the person in question didn’t like that I no longer do those massive articles showing how to install RAC using the GUI. Instead I do automatic silent builds using Vagrant and shell scripts. My recent RAC articles just describe the Vagrant builds. I’ve written about this before, but I think it’s worth repeating.

GUI is dead to me!

For a long time I was only running the GUI installations to get screen shots for articles. All the installations I did at work were scripted. This started to feel wrong, as it felt like I was promoting something I don’t do, and don’t believe in. As a result, I stopped putting the GUI stuff in most of my articles also. Instead, I include the silent installation and configuration commands, and typically give a link to my Vagrant builds.

GUI takes a long time!

I run lots of builds. Every weekend I am running Vagrant and Docker builds almost constantly on OL7, Windows 10 and macOS hosts. RAC builds of various versions. Data Guard builds of various versions. Single instance builds of various versions. Docker builds of Oracle and non-Oracle stuff. If I had to do those builds manually using the GUI I would be lucky to complete a couple a day.

Doing archaic style builds and gathering screen shots of them takes a long time, and that’s wasted time.

What about people learning the tech?

If you are trying to learn something new, you should start straight away using the silent builds. Try to ignore the GUI stuff even exists. Why?

  • Doing something like a RAC build using the GUI is really hard. There are loads of things to click on, and it’s really easy to make mistakes along the way. Typos and picking the wrong options etc. It’s far easier if the only thing you are having to change is the parameters in a property file.
  • If you do make a mistake in a manual build and have to start again, it’s soul destroying. Throwing away a day’s work and having to start again makes you want to cry, especially if you don’t get much time to do this anyway. I have people contacting me, spending weeks trying to fix things that are beyond repair. When they finally bite the bullet and start again, their mystery problem goes away. In contrast, restarting an automatic build is no big deal. Throw it away, start a new build and go and grab some food. No drama.
  • The scripted builds make the individual steps incredibly clear. You can look at the individual scripts in order and see what is happening, and just run the builds and prove they actually work, rather than having to trust things haven’t changed since the article was written.
  • No professional DBA should be using a GUI for anything, if there is a possibility to script that action. If you have DBAs or consultants working for you who still use GUIs, you should be questioning if they are really the people you want with hands on your systems. Their work will be inaccurate, and they are wasting your money by wasting your time, in some cases taking days to complete what can be done in minutes. If that is the way we expect people to work, why teach them a different way of working as a beginner? It feels wrong to me. You are getting them into bad habits from day one.
  • You are going to have to learn the scripted way to build things at some point, so why not start there. I think we’ve been fooled into thinking the GUI is easier, but it’s really not. It is a burden, not a help for most things.

If you want to still use the GUI, that is your choice. Just don’t expect me to enable your bad choices, and please don’t try and get a job working with me. You are not welcome. Sorry, but not sorry.

What about people who are not focusing on DBA work?

Then the GUI is an even bigger disaster. If you are a developer who just needs a database to work with, use an automatic build to give you what you need and start doing the stuff you care about. Why would you waste time learning to install a database? Fire up an autonomous database for free. Use a vagrant build or a docker image to give you what you need.

I really don’t see a reasonable defence for using a GUI for basic, repeatable administrative tasks anymore.

Conclusion

This is how I feel about it. If other people feel differently and want to still keep producing those GUI build articles, that’s their choice, but I would urge them to consider the message they are sending to beginners. In this day and age, the last thing we need is a bunch of people making DBAs look like dinosaurs, and wasting time on stupid stuff when they could be spending that time preparing for the future!

Cheers

Tim…

Update: Based on my reply to a comment here and on Twitter.

  • Companies demand self-service calls to do everything. They don’t want to ask a DBA to do anything. The DBA must help provide these self-service features, which will require scripting.
  • Developers need Continuous Integration and Continuous Deployment (CI/CD). That is all scripted, including creation of short lived databases for some Test/QA systems.
  • Companies demand infrastructure as code. All config and builds in source control (like Git). You can’t put button presses in Git.
  • Companies are not paying you to waste time. If we did a race, with me doing a Vagrant build of RAC and you doing it manually, I would finish several hours before you. That’s wasted time and money. I’m not sure how your company thinks about this time usage.
  • I guess it’s harder to invest the time if you only ever do something once, but the counter argument is you should be confident to replace everything if something breaks. I am never confident without a script.

If you are using a GUI to do something that can be scripted, you are failing. You may not have realised it yet though. I’m sorry if that sounds harsh, but it’s what I believe.

Why no GUI installations anymore? was first posted on January 29, 2020 at 10:50 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Woo Commerce List of Top Selling Products

Nilesh Jethwa - Tue, 2020-01-28 22:49

Best Selling WooCommerce Products

This Top selling product analysis report was built using Woocommerce Data Visualizer and adapted as a dashboard plugin for Woo Commerce reports

Credit: InfoCaptor WooCommerce Plugins

WooCommerce Sales Revenue by country Report

Nilesh Jethwa - Tue, 2020-01-28 22:44

Geographical Sales Analysis by all countries

This Sales by country report was built using Dashboard Software tool and adapted as a replacement for Woocommerce Admin Dashboard plugin

Credit: InfoCaptor WooCommerce Plugins

WooCommerce Sankey flow Revenue Visualization Report

Nilesh Jethwa - Tue, 2020-01-28 22:39

Sankey Data Flow for analysing WooCommerce Data

This Sankey Diagram analysis report was built using InfoCaptor Dashboard builder and adapted as a dashboard plugin for WooCommerce

Via: InfoCaptor WooCommerce Plugins

Revenue Report by USA States for Woo Commerce

Nilesh Jethwa - Tue, 2020-01-28 22:34

Geographical view of Sales by USA states

This USA State map view for sales summary report was built using MySQL Dashboard and Table Chart Builder and adapted as a dashboard reporting plugin for WooCommerce

Origin: InfoCaptor WooCommerce Plugins

WooCommerce Reporting – Revenue Treemap Visualization

Nilesh Jethwa - Tue, 2020-01-28 22:28

This Treemap Data visualization for USA States was built using InfoCaptor SQL Dashboard builder and adapted as a comprehensive dashboard plugin for WooCommerce

Credit: InfoCaptor WooCommerce Plugins

WooCommerce Sales Report – Heatmap and drilldown

Nilesh Jethwa - Tue, 2020-01-28 22:24

This Sales heatmap report was built using the pivot table display available within InfoCaptor business intelligence tool and adapted as a reporting and dashboard plugin for WooCommerce

Original: InfoCaptor WooCommerce Plugins

WooCommerce Product Sales Reporting

Nilesh Jethwa - Tue, 2020-01-28 22:19

This Product Sales summary analysis report was built using InfoCaptor Charts and Table builder and adapted as a plugin for woo reports and dashboard

By: InfoCaptor WooCommerce Plugins

WooCommerce Top Products and Customers Reporting

Nilesh Jethwa - Tue, 2020-01-28 22:14

This Geographical Map view of USA for e-commerce sales was built using InfoCaptor Dashboard Software and adapted as a comprehensive plugin for woo-commerce reports and dashboard

By: InfoCaptor WooCommerce Plugins

Sales KPI Dashboard for WooCommerce

Nilesh Jethwa - Tue, 2020-01-28 22:09

This Sales KPI metrics report was built using Database Dashboard builder and adapted as an extensible reports plugin for WooCommerce

Original: InfoCaptor WooCommerce Plugins

WooCommerce Product Sales Metrics by Category and SKU variants

Nilesh Jethwa - Tue, 2020-01-28 22:04

This Pivot table analysis of Product Sales report was built using InfoCaptor Business Intelligence application and adapted as an excellent plugin for e-commerce reporting

Via: InfoCaptor WooCommerce Plugins
