17 August 2025

Oracle APEX: How Application and Page Processes really execute at the same process point

Debugging a legacy application recently gave us a deeper understanding of the order in which Oracle Application Express executes Application Processes and Page Processes when they share the same process point.

In our case, the point of interest was "Before Header".

When discussing this with a few other developers, some believed the order was:

  1. First: Application processes – in sequence number order within the list of "Before Header" application processes

  2. Then: Page processes – in sequence number order within the list of "Before Header" page processes

Most of us (myself included) had never really checked — and weren't entirely sure.

To confirm, I created a simple, single-page test application with 5 processes. Each process wrote a message to a table called logtable.

  1. Application processes ("On Load: Before Header"):

    • test05 (sequence 5)

    • test15 (sequence 15)

    • test25 (sequence 25)

  2. Page processes ("Pre-rendering – Before Header"):

    • test10 (sequence 10)

    • test20 (sequence 20)

  3. Added a simple Classic Report to display logtable's contents ordered by execution sequence.
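Each process body was just a one-line insert into the log table. A minimal sketch of what such a body could look like (the column names of logtable are my invention here, not taken from the test app):

```sql
-- Body of process test05; the other four differ only in the message text
begin
  insert into logtable (log_time, message)
  values (systimestamp, 'test05 - application process, sequence 5');
end;
```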

Here's a screenshot of the application processes:

Application Processes Screenshot

And here are the page processes:

Page Processes Screenshot 1

Page Process - Before header: sequence 10

Page Process test10 Screenshot

Page Process - Before header: sequence 20

Page Process test20 Screenshot

The output was unambiguous:

Application and page processes are executed together in strict sequence number order. They are interleaved.

Test Results Screenshot

What does this mean for us as Oracle APEX developers?

If your Oracle APEX application has both application and page processes at the same process point, the sequence number alone determines execution order - not whether it's an application or page process.

A practical tip:

If you want all page processes to run *after* all application processes, give them higher starting sequence numbers (e.g., 1010, 1020, …).

Happy APEXing!!

11 August 2025

Using PL/SQL to Seed and Publish Oracle APEX application translations

For multilingual applications, it can become a little tiresome to constantly have to reseed and republish your application while testing multiple languages or debugging language-switch problems.
The other day, while tracking down a tricky language-switch bug, I thought to myself that there must be a faster way to seed and publish.  So I pulled up the documentation for the APEX_LANG API and found these two procedures: apex_lang.seed_translations and apex_lang.publish_application.
Using these two procedures meant that I could write a little script to seed and publish my application (ID 12345) in French and Spanish:
begin
  apex_lang.seed_translations   
     (p_application_id => 12345, p_language => 'fr');
  apex_lang.publish_application 
     (p_application_id => 12345, p_language => 'fr');

  apex_lang.seed_translations
     (p_application_id => 12345, p_language => 'es');
  apex_lang.publish_application 
     (p_application_id => 12345, p_language => 'es');
end;
Note, however, that when running the above outside an APEX environment, this script fails to execute. This is because the security group for the workspace needs to be set manually. The error message looks like this:
ORA-20001: Package variable g_security_group_id must be set.
ORA-06512: at "APEX_240200.WWV_FLOW_IMP", line 109
ORA-06512: at "APEX_240200.HTMLDB_LANG", line 328
To set the security group for the workspace, we first have to retrieve the workspace ID and then call the apex_util.set_security_group_id procedure. An additional complication is that a schema may be associated with more than one APEX workspace, but we'll deal with that shortly.
declare
  l_workspace_id  apex_workspaces.workspace_id%type;
begin
  -- get the workspace ID for this schema 
  -- (for now, let's assume there's only 1) 
  select workspace_id 
    into l_workspace_id
    from apex_workspaces;  
  --
  -- set security group for the Oracle APEX workspace 
  apex_util.set_security_group_id (l_workspace_id);
  
  -- seed and publish french
  apex_lang.seed_translations   
     (p_application_id => 12345, p_language => 'fr');
  apex_lang.publish_application 
     (p_application_id => 12345, p_language => 'fr');

  -- seed and publish spanish
  apex_lang.seed_translations
     (p_application_id => 12345, p_language => 'es');
  apex_lang.publish_application 
     (p_application_id => 12345, p_language => 'es');
end;
Well, that takes care of my immediate issue for application 12345 and the French and Spanish translations.  
But wouldn't it be so much better if we had a solution that checked whether any of my applications need seeding and publishing, and then seeded and published them as needed?
We can start building one by querying the APEX data dictionary view apex_application_trans_map.  The requires_synchronization column tells us which language translations need to be seeded and published. This query returns results for all of the potential multiple APEX workspaces associated with the current schema. So it handles the multiple workspace issue alluded to earlier in this blog post.
select t.*
  from apex_application_trans_map t
 where substr(t.requires_synchronization,1,1) = 'Y'
 order by t.primary_application_id,
          t.translated_app_language;
We've got pretty much everything that we need to put all of this together as a procedure:

1. Let's loop through an SQL query that retrieves every language translation (translated_app_language) that needs to be synchronized for this schema along with its workspace, application name and ID.

2. Within the loop, whenever there's a change of workspace, we'll reset the security group.

3. We'll then seed and publish the application translation.

4. The procedure will accept a single parameter, p_app_id.  If this parameter is passed with a value of null, it will loop through, seed and publish all APEX applications that need to be synchronized.  This will be done across all the APEX workspaces associated with the current schema.
procedure seedAndPublishTranslationsApp (
  p_app_id in apex_application_trans_map.primary_application_id%type default null
) is
  l_workspace_id apex_workspaces.workspace_id%type := 0;
begin
  --
  -- Loop through all of this schema's workspaces that have applications
  -- with translation languages that require synchronisation.
  --
  for rec_trans in (
    select t.workspace,
           t.primary_application_name,
           t.primary_application_id,
           t.translated_app_language,
           w.workspace_id
      from apex_application_trans_map t
           join apex_workspaces w
             on w.workspace = t.workspace  -- to get workspace ID
     where t.primary_application_id = nvl(p_app_id, t.primary_application_id) -- null means all apps
       and substr(t.requires_synchronization, 1, 1) = 'Y'
     order by w.workspace_id,
              t.primary_application_id,
              t.translated_app_language
  ) loop
  
    -- If there's a change of workspace, (re)set the security group.
    if l_workspace_id <> rec_trans.workspace_id then
      dbms_output.put_line('Workspace: "' || rec_trans.workspace || '":');
      apex_util.set_security_group_id(rec_trans.workspace_id);
      l_workspace_id := rec_trans.workspace_id;
    end if;

    apex_lang.seed_translations(
      p_application_id => rec_trans.primary_application_id,
      p_language       => rec_trans.translated_app_language
    );

    apex_lang.publish_application(
      p_application_id => rec_trans.primary_application_id,
      p_language       => rec_trans.translated_app_language
    );

    dbms_output.put_line(
      'Seeded and published language: "' || rec_trans.translated_app_language ||
      '" for application: ' || rec_trans.primary_application_id || ' - ' ||
      rec_trans.primary_application_name
    );
  end loop;

end seedAndPublishTranslationsApp;
As I don't particularly like having free-floating procedures and functions cluttering up my database, I've made a package, translateTools, that encapsulates this procedure.  You can download the package here.
There are various ways that you can use this procedure.
For example, during development, I like to have a worksheet open that lets me quickly run
  exec translateTools.seedAndPublishTranslationsApp();
whenever I want to. It certainly saves me a lot of clicking within the APEX application development environment.

Another use-case could be to implement it as a regular job that will run nightly to ensure that there are no unpublished translations left lying around.
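For the nightly-job route, a DBMS_SCHEDULER job along these lines would do it (the job name and schedule below are just examples):

```sql
begin
  dbms_scheduler.create_job (
    job_name        => 'NIGHTLY_SEED_AND_PUBLISH',
    job_type        => 'PLSQL_BLOCK',
    -- null means: all applications needing synchronization, in all workspaces
    job_action      => 'begin translateTools.seedAndPublishTranslationsApp(null); end;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',
    enabled         => true);
end;
/
```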

It's a small solution to a small problem. I hope that some of you will find it useful.

Happy APEXing and Happy Translating!!

04 June 2024

Reading and writing files on Amazon S3 from Oracle Autonomous Database and PL/SQL

In our mixed cloud environment of AWS and Oracle Autonomous databases, sometimes it's necessary to move files/blobs between the database and AWS S3 storage.  This post describes one way of moving them directly between the two.

We'll start by setting up an AWS user and user group with the permissions needed.  Then we'll create OCI credentials using PL/SQL and the DBMS_CLOUD package.  Finally, we'll move some image blobs over and back between our Oracle database's PL/SQL environment and AWS S3.


Stage One - create an AWS group, a user and get access keys


Create a User Group and assign the permission policy AmazonS3FullAccess to it.

Using the "User Groups -> Create group" wizard, we'll create a new group called S3FullAccess2024.  We'll also attach the Amazon-supplied policy "AmazonS3FullAccess" to this group.

Create an S3 User

Log in to your AWS Identity and Access Management (IAM) Console and create a new user.  Let's call this user S3PLSQL.  We'll follow the "Create User" wizard, selected from the Users option on the Identity and Access Management menu.


Create user - step 1: User details




Create user - step 2:
Set permissions by adding the user to the S3FullAccess2024 group
 that we created earlier




Create user - step 3: Review and create user




Create user - finished: User created



Create and get your Access Key and Secret Key

Next we'll view the user and create an Access Key.  To keep things as simple as possible for now, we'll select "Other" as our use-case.

Create Access key - step 1: Choose a use-case




Create Access key - step 2: Add an optional description




Create Access key - step 3: Retrieve the access key and secret


Important: Make sure that you store your Access Key and Secret Key somewhere safe, as this is the only time that the secret will be shown.  If you lose it later on, you'll have to create a new access key.





Stage Two - create an OCI credential using DBMS_CLOUD

The good news is that it's all in an Oracle environment from here on.  

Firstly, from an admin account, make sure that your database user (in this case "myuser") can execute procedures in the dbms_cloud package.
grant execute on dbms_cloud to myuser;

Connect as myuser and, using the Access Key (created earlier at the end of Stage One) as your username and the Secret Key as your password, create a credential using DBMS_CLOUD.CREATE_CREDENTIAL.  Let's call this credential "s3test_cred".

begin
  dbms_cloud.create_credential
  (credential_name => 's3test_cred',
   username => 'A*****************7U',
   password => 'y*********************************V');
end;
/
Once the above is complete, all further work continues from the myuser database account.



Stage Three - access and update an S3 bucket using PL/SQL and DBMS_CLOUD


First, ensure that we're connected as the myuser account (or whatever name we gave it).  Using the credential that we've just created, we'll try to list the contents of an existing bucket.  In this case, let's use a pre-created bucket called plsql-s3 containing three image files.

Before we start, we'll just take a quick look at the bucket's contents using S3's web interface.  In this example, we can see that it currently contains 3 jpegs.

Let's start by listing all the files in the bucket using DBMS_CLOUD.LIST_OBJECTS.  This can be done with a simple SQL query.
 
We'll pass two parameters, the credentials that we created in Stage 2 and the path to the S3 bucket that we want to list.

-- list the contents of an S3 bucket
select f.*
  from dbms_cloud.list_objects
         ('s3test_cred'
         ,'https://s3.eu-west-1.amazonaws.com/plsql-s3/') f;

Query output - 3 jpegs




Let's get one of the files and read it into a PL/SQL blob variable using the DBMS_CLOUD.GET_OBJECT function.  We'll then check the length of the retrieved blob, just to show ourselves that the get was successful and that the blob is the same size as the file on S3.

-- read a file from S3 into a blob
set serveroutput on
declare
  l_file blob;
begin

  l_file := 
  dbms_cloud.get_object
    (credential_name => 's3test_cred',
     object_uri      
      => 'https://s3.eu-west-1.amazonaws.com/plsql-s3/Sheep.jpeg');

  dbms_output.put_line
    ('retrieved blob length is: '||dbms_lob.getlength(l_file));
end;
/

retrieved blob length is: 2622899

PL/SQL procedure successfully completed.


Now we'll read the file into a blob again and use the DBMS_CLOUD.PUT_OBJECT procedure to write the retrieved blob back to S3 as a new file.

set serveroutput on
declare
  l_file blob;
begin
  -- read the file from S3 into a blob
  l_file := 
  dbms_cloud.get_object
    (credential_name => 's3test_cred',
     object_uri      
      => 'https://s3.eu-west-1.amazonaws.com/plsql-s3/Sheep.jpeg');

  -- using the blob that we read, we'll create a new file on S3
  dbms_cloud.put_object (
    credential_name => 's3test_cred',
    object_uri      
     => 'https://s3.eu-west-1.amazonaws.com/plsql-s3/Sheep2.jpeg',
    contents => l_file);

end;
/

PL/SQL procedure successfully completed.


-- let's check if the new file "Sheep2.jpeg" has been created

select f.*
  from dbms_cloud.list_objects
         ('s3test_cred'
         ,'https://s3.eu-west-1.amazonaws.com/plsql-s3/') f;

Results: 

And, using the AWS web interface to verify the contents of our S3 bucket, we'll see that the new file Sheep2.jpeg is now visible in the bucket.



So now we are using the Oracle DBMS_CLOUD package and its LIST_OBJECTS, GET_OBJECT and PUT_OBJECT procedures to list, read and write from and to AWS S3.  

To follow on from here, we could, for example, load files into an Oracle database table using Oracle APEX and then move them to S3 for permanent storage.  The approach and techniques described above can be used in this and many other similar scenarios.
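As a sketch of that follow-on scenario: assuming a hypothetical table uploaded_files (filename, blob_content) populated by an APEX file-upload page, pushing its contents to S3 could look something like this:

```sql
begin
  for rec in (select filename, blob_content from uploaded_files) loop
    -- write each stored blob to the bucket under its original file name
    dbms_cloud.put_object (
      credential_name => 's3test_cred',
      object_uri      => 'https://s3.eu-west-1.amazonaws.com/plsql-s3/'
                         || rec.filename,
      contents        => rec.blob_content);
  end loop;
end;
/
```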

I hope that this is useful to some of you.  Wishing you lots of fun on your Oracle/AWS journeys!
 

P.S. added by popular demand, here's the Sheep Photo... 🐑

taken on a beautiful summer's day on Achill Island, Co. Mayo, Ireland       
© 2024 Niall Mc Phillips

19 August 2022

Correctly sorting data containing accented characters (a.k.a. the Côte d'Ivoire and Türkiye issue)

This will be short and sweet.  Hopefully it will be useful to some of you.

As you may or may not know, the country formerly known as Turkey recently changed its official name to Türkiye, even in English.  However, when sorting by country name, it should appear between Tunisia and Turkmenistan rather than later in the list.  (Official U.N. sorted list can be found here.)

In the weeks following this change, I was asked on at least five separate occasions to look at sorting issues that arose.  Please accept my apologies in advance if you already know what I'm about to write.  I, myself, thought that it was common knowledge, but recent experience has shown otherwise.

Below, I'll explain how I correctly sort by country within Oracle.  This technique can also apply to any other data (names, etc.) that need to take account of accented characters when sorting.  I hope that it's useful to some of you.

Note: What comes below applies principally to Latin alphabets; I have not tested it on non-Latin alphabets, but I suspect that a similar approach exists.

First, demonstrating the incorrect sort

This is the default binary sort that many use; the problem is that accented characters are sorted after all non-accented letters.

So we can see that both Côte d'Ivoire and Türkiye are incorrectly sorted here and are placed after their peers.

SQL> select country_name from vw_temp_countries order by country_name; 

COUNTRY_NAME                                                                     
Cook Islands
Costa Rica
Croatia
Cuba
Curaçao
Cyprus
Czechia
Côte d'Ivoire
Tunisia
Turkmenistan
Tuvalu
Türkiye

13 rows selected. 


Now, a correct sort

With this sort, accented characters are taken into account and sorted appropriately.  The Oracle NLSSORT function is used to ensure the correct sort order.  The NLS_SORT parameter is set to swiss, as this setting accommodates most languages with Latin characters.

Here we can see that both Côte d'Ivoire and Türkiye are correctly sorted and are placed in their correct order.

SQL> select country_name from vw_temp_countries 
          order by nlssort(country_name, 'NLS_SORT = swiss');

COUNTRY_NAME                                                                     
Cook Islands
Costa Rica
Côte d'Ivoire
Croatia
Cuba
Curaçao
Cyprus
Czechia
Tunisia
Türkiye
Turkmenistan
Tuvalu

13 rows selected. 
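If you'd rather not touch every ORDER BY clause, the same effect can be achieved session-wide by setting the NLS parameters (note that NLS_COMP=LINGUISTIC also changes comparison semantics for WHERE clauses, so test before adopting this):

```sql
alter session set nls_sort = swiss;
alter session set nls_comp = linguistic;

-- a plain order by now uses the linguistic sort
select country_name from vw_temp_countries order by country_name;
```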



That's it, as I promised "short and sweet" 😀

19 November 2021

How I got SQL Developer working on a new MacBook Pro (MacOS Monterey - M1 Pro)



Note: See updates 2021-11-23, 2022-02-17 and 2022-06-29 below concerning the use of GraalVM's JDK as an alternative to JDK 17. This is the solution that I am currently using.
I'm hoping that this will be of help to others facing similar issues.

I got my new MacBook Pro (M1 Pro) a few days ago, and then set it up by restoring a Time Machine backup from my MacMini (also M1 architecture) and quickly started using my new laptop.

Very soon, I saw that SQL Developer was crashing. Sometimes it would crash immediately, sometimes after a few minutes.

While looking for a solution, I took a look at some of the forum posts on https://community.oracle.com/tech/developers/categories/sql_developer. Most of what I did comes from what I gleaned reading various posts there. 100% of the credit goes to those that contributed in the forum.

I'll spare you all the various different combinations and attempts that I made that didn't work. The following is what actually worked for me.


Download and install JDK 17

I went to the Oracle Java Downloads page at https://www.oracle.com/java/technologies/downloads/

and downloaded the file: jdk-17.0.1_macos-aarch64_bin.dmg



Opened the .dmg and double-clicked on the JDK 17.0.1.pkg installation package to open the installer.


Followed all the steps to install JDK 17.




After the installation, I checked my folder /Library/Java/JavaVirtualMachines to verify that JDK 17 was installed there.



Change the SQL Developer product.conf file to use JDK 17

To make SQL Developer use the new JDK, I needed to locate and edit the product.conf file for my version of SQL Developer.  These files are found in the hidden .sqldeveloper directory under your home directory.



As you can see here there are a lot of directories from the various versions of SQL Developer that I've installed and used over the years.  My current version is 21.2.1, so this is the directory that I want to change my file in.






I edited the product.conf file using vi and added the following line to make sure that this version of SQL Developer would use the new JDK 17 that I installed.  The SetJavaHome entry sets the Java Home to the directory containing this newly installed version.

SetJavaHome /Library/Java/JavaVirtualMachines/jdk-17.0.1.jdk/Contents/Home


This is what that section of my product.conf file looked like after editing.


Start SQL Developer

When starting SQL Developer, an "Unsupported JDK version" warning immediately pops up.  I chose to ignore this warning and clicked "Yes" to continue anyway.



The next message that I get concerns JavaFX. 



I was a little worried when I first saw this JavaFX pop-up, but then I read Jeff Smith's post from last November which was reassuring.  According to Jeff, JavaFX is only used in a few screens within SQL Developer, and I can certainly live with this issue for now.


Conclusion

So that's it.  I have a working version of SQL Developer on my MacBook Pro.
It worked for me.  I hope that it works for you or at least gets you moving closer to a solution.

Happy Developing!


Update 2021-11-23 - using GraalVM's JDK 11 as an alternative JDK

In his SQL Developer community forum post, Philipp Salvisberg suggests using GraalVM's JDK 11, which can be downloaded here.  I have tested his solution and it works for me - even the Welcome Page of SQL Developer works using this method.  Thanks Philipp.




Note: I had to remove the quarantine attribute with the following command: 

sudo xattr -r -d com.apple.quarantine /Library/Java/JavaVirtualMachines/graalvm-ce-java11-21.2.0

 

Update 2022-02-17 - SQL Developer 21.4.2

I have just upgraded to 21.4.2.  I once again edited the product.conf file to point to the GraalVM JDK.  Works just fine for now.

SetJavaHome /Library/Java/JavaVirtualMachines/graalvm-ce-java11-21.2.0/Contents/Home



Update 2022-06-29 - SQL Developer 22.2.0

I have just upgraded to 22.2.0.  Works fine.  No issues to report for now.

30 August 2021

Autonomous DB "You have exceeded the maximum number of web service requests per workspace"

We recently had an experience on an Oracle Autonomous Database where our production instance started giving us lots of errors saying:

ORA-20001: You have exceeded the maximum number of web service requests per workspace. Please contact your administrator.

As these two blog posts tell us, in a self-managed or in-house APEX installation, the page for changing the Maximum Web Service Requests parameter can be found under "Security settings -> Workspace Isolation -> Maximum Web Service Requests".  We can increase the parameter there and fix the issue.

However, on the Autonomous DB these pages are not available.  So the questions become: Can we change this parameter? and if so, how and where?

A further flurry of Googling and a deeper dive into the Oracle Documentation led us to the following page: https://docs.oracle.com/en/cloud/paas/autonomous-database/adbsa/apex-web-services.html#GUID-DA24C605-384D-4448-B73C-D00C02F5060E

Here we see that there is an APEX instance-level parameter called MAX_WEBSERVICE_REQUESTS which can be queried and modified using the APEX_INSTANCE_ADMIN package.  The default value of this parameter on an Autonomous DB is currently 50'000 outgoing requests in a rolling 24-hour period.  To run this package you must be connected as the ADMIN user.

To view the current value of MAX_WEBSERVICE_REQUESTS, we can execute the following query that uses the get_parameter function.

select apex_instance_admin.get_parameter
          ('MAX_WEBSERVICE_REQUESTS') as requests
  from dual;

REQUESTS
--------
50000

To change this value, we can use the SET_PARAMETER procedure:

begin
 apex_instance_admin.set_parameter
     ('MAX_WEBSERVICE_REQUESTS', '250000');  -- increase to 250'000
 commit;
end;
/

If we rerun the preceding query, the result confirms that our change has worked.

select apex_instance_admin.get_parameter
          ('MAX_WEBSERVICE_REQUESTS') as requests
  from dual;

REQUESTS
--------
250000

I hope that this blog post helps someone out there avoid the minor panic that we experienced for a short while today.

Happy APEXing to all!

23 January 2021

Making XML tags dynamic in SQL and PL/SQL

While trying to produce XML using Oracle's native XML functions, I needed some of the XML tags to be dynamic.  To simplify and illustrate the problem that I encountered, I'll show an example that uses the time-tested, traditional EMP and DEPT tables.  

Let's say that we need to produce something like this for all departments.

<departments>
  <accounting>
    <employee>Clark</employee>
    <employee>King</employee>
    <employee>Miller</employee>
  </accounting>
  ...
</departments>


Let's start with a short SQL query to get an aggregated employee list for each department.  The result is four rows of xmltype - one for each department.

select xmlelement("department",
         xmlagg(xmlelement("employee", initcap(e.ename))))
  from dept d
       left outer join emp e on (e.deptno = d.deptno)
 group by d.deptno;

Result (4 rows):

<department> <employee>King</employee> <employee>Miller</employee> <employee>Clark</employee> </department>

<department> <employee>Jones</employee> <employee>Adams</employee> <employee>Smith</employee> <employee>Ford</employee> <employee>Scott</employee> </department>

<department> <employee>Blake</employee> <employee>James</employee> <employee>Turner</employee> <employee>Martin</employee> <employee>Ward</employee> <employee>Allen</employee> </department>

<department> <employee></employee> </department>


Next we'll aggregate these inside a single outer tag called "departments"


select xmlelement("departments",
         xmlagg(
           xmlelement("department",
             xmlagg(xmlelement("employee", initcap(ename))))))
  from dept d
       left outer join emp e on (e.deptno = d.deptno)
 group by d.deptno, d.dname;

Result:

<departments> <department> <employee>King</employee> <employee>Miller</employee> <employee>Clark</employee> </department> <department> <employee>Jones</employee> <employee>Adams</employee> <employee>Smith</employee> <employee>Ford</employee> <employee>Scott</employee> </department> <department> <employee>Blake</employee> <employee>James</employee> <employee>Turner</employee> <employee>Martin</employee> <employee>Ward</employee> <employee>Allen</employee> </department> <department> <employee></employee> </department> </departments>


Now we'll try to change the <department> tag to carry the actual department name by replacing xmlelement("department", with xmlelement(dname, - and we'll see that the value of dname is not used.



select xmlelement("departments",
         xmlagg(
           xmlelement(dname,
             xmlagg(xmlelement("employee", initcap(ename))))))
  from dept d
       left outer join emp e on (e.deptno = d.deptno)
 group by d.deptno, d.dname;

Result:

<departments> <DNAME> <employee>King</employee> <employee>Miller</employee> <employee>Clark</employee> </DNAME> <DNAME> <employee>Jones</employee> <employee>Adams</employee> <employee>Smith</employee> <employee>Ford</employee> <employee>Scott</employee> </DNAME> <DNAME> <employee>Blake</employee> <employee>James</employee> <employee>Turner</employee> <employee>Martin</employee> <employee>Ward</employee> <employee>Allen</employee> </DNAME> <DNAME> <employee></employee> </DNAME> </departments>

So, as we can see, the value of the dname column has not been interpreted and used for the tag.  The query is simply using the string DNAME instead.   The problem now becomes "how can we force our query to use the value of dname as an XML tag?"

And, of course, Oracle have given us a solution - the evalname keyword will tell the query that the expression following it is to be evaluated and that the result of that evaluation should be used as the XML tag.  Armed with this knowledge, we'll now make a small change to the query


select xmlelement("departments",
         xmlagg(
           xmlelement(evalname lower(dname),
             xmlagg(xmlelement("employee", initcap(ename))))))
  from dept d
       left outer join emp e on (e.deptno = d.deptno)
 group by d.deptno, d.dname;

Result:

<departments> <accounting> <employee>King</employee> <employee>Miller</employee> <employee>Clark</employee> </accounting> <research> <employee>Jones</employee> <employee>Adams</employee> <employee>Smith</employee> <employee>Ford</employee> <employee>Scott</employee> </research> <sales> <employee>Blake</employee> <employee>James</employee> <employee>Turner</employee> <employee>Martin</employee> <employee>Ward</employee> <employee>Allen</employee> </sales> <operations> <employee></employee> </operations> </departments>

So, as we can see, the value of the expression following the evalname keyword has been used as the XML tag.

Of course, the above example is a simple example to illustrate the use of evalname.
The actual problem that I was solving involved calculating multiple tag values according to a reasonably complex piece of business logic that was implemented via PL/SQL packages.  These tag values were then passed as parameters to the procedure generating the XML.  From there it was easy to just use evalname parameter_name for the dynamic tag names.
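To illustrate that last point, here's a minimal sketch (the procedure and parameter names are made up) of using evalname with a tag value computed in PL/SQL:

```sql
create or replace procedure print_emps_xml (
  p_tag in varchar2  -- tag name calculated elsewhere by the business logic
) is
  l_xml xmltype;
begin
  -- evalname evaluates the bind variable at runtime and uses it as the tag
  select xmlelement(evalname p_tag,
           xmlagg(xmlelement("employee", initcap(ename))))
    into l_xml
    from emp;
  dbms_output.put_line(l_xml.getstringval());
end;
/
```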

The above was run and tested on the Oracle Autonomous Database Cloud using an "Always Free" database that is running database version 19c at the time of writing.

I hope that this is useful for some of you.  Happy XMLing with SQL and PL/SQL!




14 December 2020

Ensuring that XMLTYPE to JSON transformations create JSON arrays - even when there is only a single element.

Recently, my colleagues and I were transforming a large quantity of XML to JSON via the APEX_JSON package.  We wanted any XML tag that was repeated more than once to be transformed into a JSON array.  However, there was an issue with repeatable tags: whenever there was only one instance of the element, it wasn't created as an array.

Here's a very simplified example to illustrate the issue.

<parents>
  <parent>
    <name>Aidan</name>
    <children>
      <child>Aoife</child>    <=== two entries
      <child>Fionn</child>
    </children>
  </parent>
  <parent>
    <name>Eamon</name>
    <children>
      <child>Saoirse</child>  <=== a single entry
    </children>
  </parent>
</parents>

When an XML document such as the one above is converted to JSON, the two children of Aidan are represented as an array containing two elements - but the single child of Eamon is not.  
This can cause some issues for JSON parsers consuming the data, as they now have to cater for two scenarios (with-array and without-array) to correctly extract the data. 
[
   {
      "name":"Aidan",
      "children":[         <=== this is what we want, an array.
         "Aoife",
         "Fionn"
      ]
   },
   {
      "name":"Eamon",
      "children":{         <=== we wanted an array here too :(
         "child":"Saoirse"
      }
   }
]

This piece of PL/SQL code illustrates the issue.
set serveroutput on
declare
  v_xml    xmltype;
  v_json   clob;
begin
  v_xml := xmltype(
  '<parents>
      <parent>
        <name>Aidan</name>
        <children>
          <child>Aoife</child>
          <child>Fionn</child>
        </children>
      </parent>
      <parent>
        <name>Eamon</name>
        <children>
          <child>Saoirse</child>
        </children>
      </parent>
    </parents>');

  apex_json.initialize_clob_output;
  apex_json.write(v_xml);
  v_json := apex_json.get_clob_output;
  apex_json.free_output;  
  
  dbms_output.put_line(v_json);
end;
/
[
 {"name":"Aidan",
  "children":["Aoife","Fionn"]},    <=== is an array
 {"name":"Eamon",
  "children":{"child":"Saoirse"}}   <=== is not an array :(
]
We tried several ways to work around this. For example, one unsatisfactory solution that we tried was to create an empty <child/> tag whenever there was only one entry and then to subsequently strip it out of the resulting JSON.

However, a quick communication with the ever-responsive APEX Development Team yielded a more elegant solution... 

Apparently, there are some naming conventions used by the DB XML to JSON generators.  Amongst these are :
  • if an XML node name is "rowset", then it always maps to a JSON array. 
  • if an XML node has a sub-node that ends in "_row", then it also always maps to a JSON array.
Armed with this knowledge, we modified our XML slightly and renamed the <child> tag to <child_row> so that our PL/SQL example now becomes:
set serveroutput on
declare
  v_xml    xmltype;
  v_json   clob;
begin
  v_xml := xmltype(
  '<parents>
      <parent>
        <name>Aidan</name>
        <children>
          <child_row>Aoife</child_row>
          <child_row>Fionn</child_row>
        </children>
      </parent>
      <parent>
        <name>Eamon</name>
        <children>
          <child_row>Saoirse</child_row>
        </children>
      </parent>
    </parents>');

  apex_json.initialize_clob_output;
  apex_json.write(v_xml);
  v_json := apex_json.get_clob_output;
  apex_json.free_output;  
  
  dbms_output.put_line(v_json);
end;
/
[ 
 {"name":"Aidan",
  "children":["Aoife","Fionn"]},  <=== is an array
 {"name":"Eamon",
  "children":["Saoirse"]}         <=== is also an array!!
]

Now we can consistently generate arrays without having to resort to complex pre- and post-processing.  The only little bit of pre-processing that we need to do is ensuring that any XML tags that must become JSON arrays end with "_row".

Many thanks to the great APEX Development Team and especially to Christian Neumueller for the support and help that they gave us for this issue!!

The above was run and tested on the Oracle Autonomous Database Cloud using an "Always Free" database that is running database version 19c at the time of writing.

15 September 2020

Making SQL Developer's UI use your preferred language

Recently I was using Oracle's SQL Developer at a client site where all the virtual PCs were configured for a non-English language and my SQL Developer was choosing this language by default for the User Interface.

I'm so used to working with SQL Developer in English that I found it a little distracting to read the menu options, etc. in a different language - even if it's a language that I use on a daily basis.

After a short time searching online I came across this quick fix written by Matthias Karl Schulz in 2011 - and it is essentially still valid today in SQL Developer 20.2.  

To summarise, here's what you need to do:

  1. Find the sqldeveloper.conf file inside your SQL Developer installation. 

    On a PC I found this at
    ...sqldeveloper\sqldeveloper\bin\sqldeveloper.conf

    On macOS it's at /Applications/SQLDeveloper.app/Contents/Resources/sqldeveloper/sqldeveloper/bin/sqldeveloper.conf
    (Note: On macOS you should right-click on the SQLDeveloper.app file and choose "Show Package Contents" to open these directories.)

  2. For English, just add the following line to the file.  For other languages, replace the "en" with the ISO 639-1 two-letter code for whichever language you wish to force SQL Developer to use.  I've tried German (de) and Spanish (es) just to see how they look, and both work just fine.

    AddVMOption -Duser.language=en

  3. Save the file

  4. Close SQL Developer if it is already open

  5. Restart SQL Developer.

And there you have it: SQL Developer will now open in the language of your choice.  
Not all languages are available.  For example, I tried the Irish language ("ga" in ISO 639-1) but, as I expected, it wasn't available (yet).

09 August 2019

3 ways to synchronize your Oracle Text indexes

One of my favourite features of the Oracle database is Oracle Text.  In this blog post I'll discuss different approaches to the synchronisation of Oracle Text indexes.

Oracle Text indexes are synchronized via a queue of pending updates.  Whenever the rows upon which the index is built are inserted or changed, those rows are added to the queue.  This queue is then processed according to the synchronization method that you defined at index creation time.  This means that the synchronization is independent of your transaction,  i.e. your transaction completes without waiting for the index synchronization.

It is important to note that the index is not fully recreated when synchronized but is incrementally updated.  Therefore, updates or insertions of a few rows will be synchronized very quickly, but an update or insertion of, let's say, 800,000 rows will take significantly longer to synchronize.

You can view the queue of pending rows using the ctx_user_pending view.

To see how many rows are pending for each index:
select pnd_index_name, count(*)
  from ctx_user_pending
 group by pnd_index_name;
When creating your Oracle Text indexes, you have three ways to specify how you want them to be synchronized:
  • manually (this is the default if no syncing method is specified)
create index txt_index on my_table (text_col)
indextype is ctxsys.context;
  • on commit - this will start synchronizing immediately after the transaction is committed
create index txt_index on my_table (text_col)
indextype is ctxsys.context
parameters ('sync (on commit) ');
  • at regular intervals (needs the CREATE JOB privilege) - this example syncs hourly
create index txt_index on my_table (text_col)
indextype is ctxsys.context
parameters ('sync (every "sysdate+(1/24)")');
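With the default (manual) method, the pending queue simply keeps growing until you synchronize the index yourself.  This is typically done with ctx_ddl.sync_index - here is a small sketch (the index name is taken from the examples above; you need the CTXAPP role or execute privilege on ctx_ddl):
begin
  -- process the pending queue for txt_index now;
  -- optionally pass memory => '50M' to give the sync more working memory
  ctx_ddl.sync_index(idx_name => 'txt_index');
end;
/
You could also schedule this call yourself, e.g. from a DBMS_SCHEDULER job, which is effectively what the "every" option above does for you.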
Each of these methods has its advantages and disadvantages.  For example, if you can live with an index that is synchronized daily and want to minimise the database load during working hours, you might consider syncing at a regular daily interval.

Similarly, if you have a lightish load and just have a few changes at a time, then maybe an "on commit" synchronization is the right one for you.  In my experience, the majority of Oracle Text applications have (sometimes wrongly) opted for this.

I hope that this has explained these three options in a quick and simple way.  The well-written Oracle Text Developer's Guide contains a lot of detail on how to manage your Oracle Text indexes.