15 October 2025

Creating a Google XML Sitemap using Oracle APEX. #JoelKallmanDay

Oracle APEX makes it easy to build modern web apps, but ensuring that your public pages are properly indexed by Google sometimes requires a little extra work. Recently, I was asked to generate a dynamic XML Sitemap for one such APEX app, and here’s how I went about it.

What is a Google Sitemap, and why is it important?

A Google Sitemap is a simple XML file, usually named sitemap.xml, that lists all the important URLs on your site along with metadata like update dates and priority. It helps search engines like Google discover and understand the structure of your site more efficiently, especially pages that may be hard to find via internal links or other navigation. Submitting it through Google Search Console gives you visibility into how, and how often, Google crawls and indexes your pages. In short, it is a practical means to ensure faster, more reliable indexing of your site.

Our APEX application

For simplicity, we’ll ignore the non-essential parts and reduce the app to its three main pages.  The pages that we want to appear on the Google XML Sitemap are the detail pages, because we want to be sure that each one of our thousands of articles is indexed.

  • A home page
  • A search page
  • A detail page for each entry in the database (1,000s of entries)

Understanding the XML structure

The first step was to check Google’s documentation for the required XML structure:

Google Sitemap XML documentation

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/foo.html</loc>
    <lastmod>2022-06-04</lastmod>
  </url>
</urlset>

We quickly realized that, for our use-case, we also needed to include a News Sitemap:

Google News Sitemap documentation

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
    xmlns:news="http://www.google.com/schemas/sitemap-news/0.9">
  <url>
    <loc>http://www.mysite.org/ords/r/press/articles?p_slug=companies-a-b-in-merger-talks</loc>
    <lastmod>2022-06-04</lastmod>
    <news:news>
      <news:publication>
        <news:name>The Example Times</news:name>
        <news:language>en</news:language>
      </news:publication>
      <news:publication_date>2008-12-23</news:publication_date>
      <news:title>Companies A, B in Merger Talks</news:title>
    </news:news>
  </url>
</urlset>


Our Data Source: vw_articles View

For the purposes of this blog, I’ve replaced the actual view that we used with a view called vw_articles containing generated demo data. The view returns a list of news articles along with metadata such as their unique IDs, titles, publication dates, last update dates, etc.  A column called url_slug contains a descriptive but unique alternate key in readable text form (e.g., tree-falls-in-a-forest, prime-minister-slips-on-banana).

Name             Null?    Type          
---------------- -------- ------------- 
ARTICLE_ID       NOT NULL NUMBER        
TITLE            NOT NULL VARCHAR2(200) 
URL_SLUG         NOT NULL VARCHAR2(200) 
PUBLICATION_DATE NOT NULL DATE          
LAST_MODIFIED    NOT NULL DATE          
AUTHOR                    VARCHAR2(100) 
CATEGORY                  VARCHAR2(50)
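If you'd like to follow along, here's a sketch of how such a demo view could be created with generated data. The titles, slugs, and row count here are invented for illustration, not the view from the real application:

```sql
-- generate 100 demo articles with staggered dates
create or replace view vw_articles as
select level                           as article_id,
       'Demo Article Number ' || level as title,
       'demo-article-number-' || level as url_slug,
       sysdate - level - 10            as publication_date,
       sysdate - level                 as last_modified,
       'A. Writer'                     as author,
       'General'                       as category
  from dual
connect by level <= 100;
```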



Generating the XML per article

Now that we know what XML structure Google expects, let’s generate it dynamically using SQL. We'll start by generating the XML for each article: the query returns one row per article, each containing the XML fragment describing that article.

select xmlelement("url",
         xmlelement("loc", 'http://www.mysite.org/ords/r/press/articles?p_slug=' 
                           || a.url_slug),
         xmlelement("lastmod", to_char(a.last_modified, 'YYYY-MM-DD')),
         xmlelement("changefreq", 
                     case
                       when a.last_modified > sysdate-15 
                         then 'daily'
                       when a.last_modified > sysdate-90 
                         then 'weekly'
                         else 'monthly'
                     end),
         xmlelement("priority", 
                     case
                       when a.last_modified > sysdate-15 
                         then '0.8'
                       when a.last_modified > sysdate-90 
                         then '0.6'
                         else '0.4'
                     end),
          xmlelement("news:news",
            xmlelement("news:publication",
              xmlelement("news:name",'The Best Daily News '||a.article_id),
              xmlelement("news:language",'en')
             ),  
            xmlelement("news:publication_date", to_char(a.publication_date, 'YYYY-MM-DD')),
            xmlelement("news:title", a.title)
           )
         )
  from vw_articles a;

You may notice that we’ve set each entry’s change frequency (changefreq) and priority dynamically, based on the number of days since the article was last updated.

The SQL statement above gives us something that looks like this for each entry:

<url>
 <loc>http://www.mysite.org/ords/r/press/articles?p_slug=olympic-athletes-break-three-world-records</loc>
 <lastmod>2025-05-18</lastmod>
 <changefreq>monthly</changefreq>
 <priority>0.4</priority>
 <news:news>
  <news:publication>
   <news:name>The Best Daily News 4</news:name>
   <news:language>en</news:language>
  </news:publication>
  <news:publication_date>2025-05-18</news:publication_date>
  <news:title>Olympic Athletes Break Three World Records</news:title>
 </news:news>
</url>


Aggregating entries into a sitemap

Rather than returning multiple XML fragments, we’d like to aggregate the entries into a single XML document. To do this, we first use the XMLAGG function so that all the entries are returned as a single, aggregated row. We would like the entries sorted in descending order of last_modified, so we add order by a.last_modified desc inside the XMLAGG call, after the element expression.

We’ll enclose these aggregated entries inside a <urlset> tag to create a complete, valid XML document. Finally, we’ll use xmlserialize to convert our XMLTYPE output to an indented CLOB. We’ve also added the XML declaration at the start of the document.

select '<?xml version="1.0" encoding="UTF-8"?>' || chr(10) ||
   xmlserialize(
     content
     xmlelement(
           "urlset", 
           xmlattributes
              ('http://www.sitemaps.org/schemas/sitemap/0.9' as "xmlns",
               'http://www.google.com/schemas/sitemap-news/0.9' as "xmlns:news"),
           xmlagg(
             xmlelement("url",
               xmlelement("loc", 'http://www.mysite.org/ords/r/press/articles?p_slug=' 
                                 || a.url_slug),
               xmlelement("lastmod", to_char(a.last_modified, 'YYYY-MM-DD')),
               xmlelement("changefreq", 
                          case
                            when a.last_modified > sysdate-15 
                              then 'daily'
                            when a.last_modified > sysdate-90 
                              then 'weekly'
                              else 'monthly'
                          end),
               xmlelement("priority", 
                          case
                            when a.last_modified > sysdate-15 
                              then '0.8'
                            when a.last_modified > sysdate-90 
                              then '0.6'
                              else '0.4'
                          end),
                xmlelement("news:news",
                  xmlelement("news:publication",
                    xmlelement("news:name",'The Very Best Daily News'),
                    xmlelement("news:language",'en')
                   ),  
                  xmlelement("news:publication_date", to_char(a.publication_date, 'YYYY-MM-DD')),
                  xmlelement("news:title", a.title)
                 )
               ) order by a.last_modified desc
            )
        )
     as clob indent size = 2
   ) as xml_output
from vw_articles a;

For very large datasets (tens of thousands of URLs), consider splitting your sitemap into multiple sitemap files and generating a sitemap index file, as per Google’s recommendation (maximum 50,000 URLs per sitemap file).
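For reference, a sitemap index file (as defined by the sitemaps.org protocol) is itself a small XML document that points at the individual sitemap files. The URLs below are illustrative only:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.mysite.org/ords/r/press/sitemap1</loc>
    <lastmod>2025-10-01</lastmod>
  </sitemap>
  <sitemap>
    <loc>http://www.mysite.org/ords/r/press/sitemap2</loc>
    <lastmod>2025-10-01</lastmod>
  </sitemap>
</sitemapindex>
```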

Final XML

This gives us our final XML, which is one <urlset> that encompasses all the <url> entries that we want to appear on the sitemap:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9" xmlns:news="http://www.google.com/schemas/sitemap-news/0.9">
  <url>
    <loc>http://www.mysite.org/ords/r/press/articles?p_slug=climate-change-policy-update</loc>
    <lastmod>2025-08-17</lastmod>
    <changefreq>daily</changefreq>
    <priority>0.8</priority>
    <news:news>
      <news:publication>
        <news:name>The Very Best Daily News</news:name>
        <news:language>en</news:language>
      </news:publication>
      <news:publication_date>2025-08-16</news:publication_date>
      <news:title>Climate Change Policy Update</news:title>
    </news:news>
  </url>
  ...
  ... thousands of entries ...
</urlset>

You can validate your sitemap using the W3C XML Validator or Google’s own testing tools before submitting.

Serving the Sitemap from APEX

  1. Create a blank APEX page with a minimal template. Call this new page sitemap with an alias of sitemap. Make sure that this page is a public page.
Screenshot showing sitemap output XML
  2. I strongly recommend using packages when developing any substantial piece of PL/SQL. For this example, I’ve created a package that contains the SQL statement to generate the XML shown above. The package is called googlesitemaps and you can download it from here.

    Basically, this package's procedure generateMainSitemap selects the XML <urlset> into a CLOB and then outputs that CLOB to the HTTP stream.

   procedure generateMainSitemap is
      l_clob_output  clob;  -- Holds the generated XML sitemap content
   begin
     ...
     ...   (removed for brevity)
     ...   A section that retrieves the XML in a CLOB variable 
     ...   called l_clob_output using the SQL shown earlier.
     ...   You can download this using the link above
     ...
     
     -- Set HTTP response content type to XML
     owa_util.mime_header('application/xml', false);
     
     -- Specify content length for client optimization
     htp.p('Content-Length: ' || dbms_lob.getlength(l_clob_output));
     
     -- Close HTTP headers
     owa_util.http_header_close;
     
     -- Output the XML content without HTML escaping (preserve XML structure)
     apex_util.prn(p_clob => l_clob_output, p_escape => false);

   end generateMainSitemap;
  3. Once the package is in place, create a pre-rendering, before-header process called “Output the XML sitemap”. Inside its begin ... end block, this process will contain two lines.

    The first one calls the googlesitemaps.generateMainSitemap procedure to output the XML directly to the HTTP stream.

    The second one calls apex_application.stop_apex_engine to stop all APEX processing dead in its tracks. This ensures that nothing else is output to the HTTP stream.
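    Putting those two lines together, the process body is simply:

```sql
begin
  -- output the XML sitemap directly to the HTTP stream
  googlesitemaps.generateMainSitemap;

  -- stop all further APEX processing so that nothing else
  -- is written to the HTTP stream after the XML
  apex_application.stop_apex_engine;
end;
```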


Submitting the sitemap to the Google Search Console

Now we're ready to submit the sitemap to Google.  
Google will automatically search for a file called "sitemap.xml" but, unfortunately, with the current version of APEX (24.2), we can't add a dot in a page alias.  This means that we can't "pretend" to have a file called "sitemap.xml".  

The workaround to this is to submit your new sitemap directly to Google using the Google Search Console for your site.  The search console looks something like this.


If we click on Sitemaps we’ll come to a page that looks like this:

Here, you simply have to enter the URL of your newly created sitemap page and submit it to Google. It won't be processed immediately, but it will be added to a queue for processing. In my experience, it seems to get processed within two or three days.

Conclusion

So, there we have it. With this approach, you can make your APEX applications more discoverable and ensure your content reaches a wider audience through proper search indexing.  I hope that this blog post helps some of you make your public-facing APEX applications more visible.

Happy APEXing!!

17 August 2025

Oracle APEX: How Application and Page Processes really execute at the same process point

Debugging a legacy application recently gave us a deeper understanding of the order in which Oracle Application Express executes Application Processes and Page Processes when they share the same process point.

In our case, the point of interest was "Before Header".

When discussing this with a few other developers, some believed the order was:

  1. First: Application processes – in sequence number order within the list of "Before Header" application processes

  2. Then: Page processes – in sequence number order within the list of "Before Header" page processes

Most of us (myself included) had never really checked — and weren't entirely sure.

To confirm, I created a simple, single-page test application with 5 processes. Each process wrote a message to a table called logtable.

  1. Application processes ("On Load: Before Header"):

    • test05 (sequence 5)

    • test15 (sequence 15)

    • test25 (sequence 25)

  2. Page processes ("Pre-rendering – Before Header"):

    • test10 (sequence 10)

    • test20 (sequence 20)

  3. Added a simple Classic Report to display logtable's contents ordered by execution sequence.
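Each of the five processes contained nothing more than an insert into the log table, along the lines of the following sketch (the table and column names are illustrative; only the process-name literal differed between processes):

```sql
-- logged by each test process so the execution order can be read back later
insert into logtable (log_time, process_name)
values (systimestamp, 'test05');
```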

Here's a screenshot of the application processes:

Application Processes Screenshot

And here are the page processes:

Page Processes Screenshot 1

Page Process - Before header: sequence 10

Page Process test10 Screenshot

Page Process - Before header: sequence 20

Page Process test20 Screenshot

The output was unambiguous:

Application and page processes are executed together in strict sequence number order. They are interleaved.

Test Results Screenshot

What does this mean for us as Oracle APEX developers?

If your Oracle APEX application has both application and page processes at the same process point, the sequence number alone determines execution order - not whether it's an application or page process.

A practical tip:

If you want all page processes to run *after* all application processes, give them higher starting sequence numbers (e.g., 1010, 1020, …).

Happy APEXing!!

11 August 2025

Using PL/SQL to Seed and Publish Oracle APEX application translations

For multilingual applications, it can become a little tiresome to constantly have to reseed and republish your application while testing multiple languages or debugging language-switch problems.
The other day, while tracking down a tricky language-switch bug, I thought to myself that there must be a faster way to seed and publish.  So I pulled up the documentation for the APEX_LANG API and found these two procedures: apex_lang.seed_translations and apex_lang.publish_application.
Using these two procedures meant that I could write a little script to seed and publish my application (ID 12345) in French and Spanish:
begin
  apex_lang.seed_translations   
     (p_application_id => 12345, p_language => 'fr');
  apex_lang.publish_application 
     (p_application_id => 12345, p_language => 'fr');

  apex_lang.seed_translations
     (p_application_id => 12345, p_language => 'es');
  apex_lang.publish_application 
     (p_application_id => 12345, p_language => 'es');
end;
Note, however, that when run outside an APEX environment, this script fails to execute because the security group for the workspace needs to be set manually. The error message looks like this:
ORA-20001: Package variable g_security_group_id must be set.
ORA-06512: at "APEX_240200.WWV_FLOW_IMP", line 109
ORA-06512: at "APEX_240200.HTMLDB_LANG", line 328
To set the security group for the workspace, we first have to retrieve the workspace ID and then call the apex_util.set_security_group_id procedure. An additional complication is that a schema may be associated with more than one APEX workspace, but we'll deal with that shortly.
declare
  l_workspace_id  apex_workspaces.workspace_id%type;
begin
  -- get the workspace ID for this schema 
  -- (for now, let's assume there's only 1) 
  select workspace_id 
    into l_workspace_id
    from apex_workspaces;  
  --
  -- set security group for the Oracle APEX workspace 
  apex_util.set_security_group_id (l_workspace_id);
  
  -- seed and publish french
  apex_lang.seed_translations   
     (p_application_id => 12345, p_language => 'fr');
  apex_lang.publish_application 
     (p_application_id => 12345, p_language => 'fr');

  -- seed and publish spanish
  apex_lang.seed_translations
     (p_application_id => 12345, p_language => 'es');
  apex_lang.publish_application 
     (p_application_id => 12345, p_language => 'es');
end;
Well, that takes care of my immediate issue for application 12345 and the French and Spanish translations.  
But... wouldn't it be so much better to have a solution that checks whether any of my applications need seeding and publishing, and then seeds and publishes them as needed?
We can start building one by querying the APEX data dictionary view apex_application_trans_map.  Its requires_synchronization column tells us which language translations need to be seeded and published. The query below returns results across all of the APEX workspaces associated with the current schema, so it also handles the multiple-workspace issue alluded to earlier in this blog post.
select t.*
  from apex_application_trans_map t
 where substr(t.requires_synchronization,1,1) = 'Y'
 order by t.primary_application_id,
          t.translated_app_language;
We've got pretty much everything that we need to put all of this together as a procedure:

1. Let's loop through an SQL query that retrieves every language translation (translated_app_language) that needs to be synchronized for this schema along with its workspace, application name and ID.

2. Within the loop, whenever there's a change of workspace, we'll reset the security group.

3. We'll then seed and publish the application translation.

4. The procedure will accept a single parameter, p_app_id.  If this parameter is passed with a value of null, it will loop through, seed and publish all APEX applications that need to be synchronized.  This will be done across all the APEX workspaces associated with the current schema.
procedure seedAndPublishTranslationsApp (
  p_app_id in apex_application_trans_map.primary_application_id%type
           default null  -- null means all applications
) is
  l_workspace_id apex_workspaces.workspace_id%type := 0;
begin
  --
  -- Loop through all of this schema's workspaces that have applications
  -- with translation languages that require synchronisation.
  --
  for rec_trans in (
    select t.workspace,
           t.primary_application_name,
           t.primary_application_id,
           t.translated_app_language,
           w.workspace_id
      from apex_application_trans_map t
           join apex_workspaces w
             on w.workspace = t.workspace  -- to get workspace ID
     where t.primary_application_id = nvl(p_app_id, t.primary_application_id) -- null means all apps
       and substr(t.requires_synchronization, 1, 1) = 'Y'
     order by w.workspace_id,
              t.primary_application_id,
              t.translated_app_language
  ) loop
  
    -- If there's a change of workspace, (re)set the security group.
    if l_workspace_id <> rec_trans.workspace_id then
      dbms_output.put_line('Workspace: "' || rec_trans.workspace || '":');
      apex_util.set_security_group_id(rec_trans.workspace_id);
      l_workspace_id := rec_trans.workspace_id;
    end if;

    apex_lang.seed_translations(
      p_application_id => rec_trans.primary_application_id,
      p_language       => rec_trans.translated_app_language
    );

    apex_lang.publish_application(
      p_application_id => rec_trans.primary_application_id,
      p_language       => rec_trans.translated_app_language
    );

    dbms_output.put_line(
      'Seeded and published language: "' || rec_trans.translated_app_language ||
      '" for application: ' || rec_trans.primary_application_id || ' - ' ||
      rec_trans.primary_application_name
    );
  end loop;

end seedAndPublishTranslationsApp;
As I don't particularly like having free-floating procedures and functions cluttering up my database, I've made a package, translateTools, that encapsulates this procedure.  You can download the package here.
There are various ways that you can use this procedure.
For example, during development, I like to have a worksheet open that lets me quickly run
  exec translateTools.seedAndPublishTranslationsApp();
whenever I want to. It certainly saves me a lot of clicking within the APEX application development environment.

Another use-case could be to implement it as a regular job that will run nightly to ensure that there are no unpublished translations left lying around.
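That nightly run can be set up with DBMS_SCHEDULER. Here's a sketch; the job name and schedule are, of course, just examples:

```sql
begin
  -- run the seed-and-publish procedure every night at 02:00
  dbms_scheduler.create_job (
    job_name        => 'NIGHTLY_SEED_AND_PUBLISH',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin translateTools.seedAndPublishTranslationsApp(null); end;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',
    enabled         => true);
end;
/
```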

It's a small solution to a small problem. I hope that some of you will find it useful.

Happy APEXing and Happy Translating!!

04 June 2024

Reading and writing files on Amazon S3 from Oracle Autonomous Database and PL/SQL

In our mixed cloud environment of AWS and Oracle Autonomous databases, sometimes it's necessary to move files/blobs between the database and AWS S3 storage.  This post describes one way of moving them directly between the two.

We'll start by setting up an AWS user and user group with the permissions needed.  Then we'll create OCI credentials using PL/SQL and the DBMS_CLOUD package.  Finally, we'll move some image blobs over and back between our Oracle database's PL/SQL environment and AWS S3.


Stage One - create an AWS group, a user and get access keys


Create a User Group and assign the permission policy AmazonS3FullAccess to it.

Using the "User Groups -> Create group" wizard, we'll create a new group called S3FullAccess2024.  We'll also attach the Amazon-supplied policy "AmazonS3FullAccess" to this group.









Create an S3 User

Log in to your AWS Identity and Access Management (IAM) Console and create a new user.  Let's call this user S3PLSQL.  We'll follow the "Create User" wizard, which we'll select from the Users option on the Identity and Access Management menu.


Create user - step 1: User details




Create user - step 2:
Set permissions by adding the user to the S3FullAccess2024 group
 that we created earlier




Create user - step 3: Review and create user




Create user - finished: User created



Create and get your Access Key and Secret Key

Next we'll view the user and create an Access Key.  To keep things as simple as possible for now, we'll select "Other" as our use-case.

Create Access key - step 1: Choose a use-case




Create Access key - step 2: Add an optional description




Create Access key - step 3: Retrieve the access key and secret


Important: Don't forget to store your key and secret somewhere safe, as this will be the only time that you will see them.  If you lose your key later on, you'll have to create a new access key.





Stage Two - create an OCI credential using DBMS_CLOUD

The good news is that it's all in an Oracle environment from here on.  

Firstly, from an admin account, make sure that your database user (in this case "myuser") can execute procedures in the dbms_cloud package.
grant execute on dbms_cloud to myuser;

Connect as myuser and, using the Access Key (created earlier at the end of Stage One) as your username and the Secret Key as your password, create a credential using DBMS_CLOUD.CREATE_CREDENTIAL.  Let's call this credential "s3test_cred".

begin
  dbms_cloud.create_credential
  (credential_name => 's3test_cred',
   username => 'A*****************7U',
   password => 'y*********************************V');
end;
/
Once the above is complete, the credential is in place and we can continue working in the myuser database account.



Stage Three - access and update an S3 bucket using PL/SQL and DBMS_CLOUD


Firstly, ensure that we're now connected using the myuser account (or whatever name we gave it).  Using the credential that we've just created, we'll try to list the contents of an existing bucket.  In this case, let's use a pre-created bucket called plsql-s3 containing three image files.

Before we start, we'll just take a quick look at the bucket's contents using S3's web interface.  In this example, we can see that it currently contains 3 jpegs.






Let's start by listing all the files in the bucket using DBMS_CLOUD.LIST_OBJECTS.  This can be done with a simple SQL query.
 
We'll pass two parameters, the credentials that we created in Stage 2 and the path to the S3 bucket that we want to list.

-- list the contents of an S3 bucket
select f.*
  from dbms_cloud.list_objects
         ('s3test_cred'
         ,'https://s3.eu-west-1.amazonaws.com/plsql-s3/') f;

Query output - 3 jpegs




Let's get one of the files and read it into a PL/SQL blob variable using the DBMS_CLOUD.GET_OBJECT function.  We'll then check the length of the retrieved blob, just to show ourselves that the get was successful and that the blob is the same size as the file on S3.

-- read a file from S3 into a blob
set serveroutput on
declare
  l_file blob;
begin

  l_file := 
  dbms_cloud.get_object
    (credential_name => 's3test_cred',
     object_uri      
      => 'https://s3.eu-west-1.amazonaws.com/plsql-s3/Sheep.jpeg');

  dbms_output.put_line
    ('retrieved blob length is: '||dbms_lob.getlength(l_file));
end;
/

retrieved blob length is: 2622899

PL/SQL procedure successfully completed.


Now we'll read the file into a blob again and use the DBMS_CLOUD.PUT_OBJECT procedure to write that blob back to S3 as a new file.

set serveroutput on
declare
  l_file blob;
begin
  -- read the file from S3 into a blob
  l_file := 
  dbms_cloud.get_object
    (credential_name => 's3test_cred',
     object_uri      
      => 'https://s3.eu-west-1.amazonaws.com/plsql-s3/Sheep.jpeg');

  -- using the blob that we read, we'll create a new file on S3
  dbms_cloud.put_object (
    credential_name => 's3test_cred',
    object_uri      
     => 'https://s3.eu-west-1.amazonaws.com/plsql-s3/Sheep2.jpeg',
    contents => l_file);

end;
/

PL/SQL procedure successfully completed.


-- let's check if the new file "Sheep2.jpeg" has been created

select f.*
  from dbms_cloud.list_objects
         ('s3test_cred'
         ,'https://s3.eu-west-1.amazonaws.com/plsql-s3/') f;

Results: 






And, using the AWS web interface to verify the contents of our S3 bucket, we'll see that the new file Sheep2.jpeg is now visible in the bucket.



So now we are using the Oracle DBMS_CLOUD package and its LIST_OBJECTS, GET_OBJECT and PUT_OBJECT procedures to list, read and write files on AWS S3.  

To follow on from here, we could, for example, load files into an Oracle database table using Oracle APEX and then move them to S3 for permanent storage.  The approach and techniques described above can be used in this and many other similar scenarios.
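As a sketch of that follow-on idea, suppose uploads land in a hypothetical table my_files (file_name, file_blob) via an APEX upload page; archiving one of them to S3 then looks like this (table, column, and object names are all illustrative):

```sql
declare
  l_blob blob;
begin
  -- fetch an uploaded file from our (hypothetical) staging table
  select f.file_blob
    into l_blob
    from my_files f
   where f.file_name = 'Sheep.jpeg';

  -- archive it to S3 using the credential created earlier
  dbms_cloud.put_object (
    credential_name => 's3test_cred',
    object_uri      => 'https://s3.eu-west-1.amazonaws.com/plsql-s3/archive/Sheep.jpeg',
    contents        => l_blob);
end;
/
```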

I hope that this is useful to some of you.  Wishing you lots of fun on your Oracle/AWS journeys!
 

P.S. added by popular demand, here's the Sheep Photo... 🐑

taken on a beautiful summer's day on Achill Island, Co. Mayo, Ireland       
© 2024 Niall Mc Phillips












19 August 2022

Correctly sorting data containing accented characters (a.k.a. the Côte d'Ivoire and Türkiye issue)

This will be short and sweet. Hopefully it will be useful to some of you.

As you may or may not know, the country formerly known as Turkey recently changed its official name to Türkiye, even in English. However, when sorting by country name, it should appear between Tunisia and Turkmenistan rather than later in the list. (The official U.N. sorted list can be found here.)

In the weeks following this change, I was asked on at least five separate occasions to look at sorting issues that arose. Please accept my apologies in advance if you already know what I'm about to write. I thought it was common knowledge, but recent experience has shown otherwise.

Below, I'll explain how I correctly sort by country within Oracle. This technique also applies to any other data (names, etc.) that need to take accented characters into account when sorting. I hope it's useful to some of you.

Note: What comes below applies principally to Latin alphabets. I haven’t tested on non-Latin alphabets, but I suspect a similar approach exists.

First, demonstrating the incorrect sort

This is what you get by default in many databases (a binary sort) — the problem is that accented characters are sorted after all non-accented letters.

So we can see that both Côte d'Ivoire and Türkiye are incorrectly sorted here and are placed after their peers.

SQL> select country_name from vw_temp_countries order by country_name;

COUNTRY_NAME
-------------
Cook Islands
Costa Rica
Croatia
Cuba
Curaçao
Cyprus
Czechia
Côte d'Ivoire
Tunisia
Turkmenistan
Tuvalu
Türkiye

12 rows selected.

Now, a correct sort

With this sort, accented characters are taken into account and sorted appropriately. The Oracle NLSSORT function ensures the correct order. The NLS_SORT parameter is set to swiss, as this setting accommodates most Latin-based languages.

SQL> select country_name 
         from vw_temp_countries
         order by nlssort(country_name, 'NLS_SORT = swiss');

COUNTRY_NAME
-------------
Cook Islands
Costa Rica
Côte d'Ivoire
Croatia
Cuba
Curaçao
Cyprus
Czechia
Tunisia
Türkiye
Turkmenistan
Tuvalu

12 rows selected.
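If you'd rather not wrap every ORDER BY in NLSSORT, the same effect can be achieved session-wide by switching the session to linguistic sorting and comparison:

```sql
-- make all sorting and comparison in this session linguistic
alter session set nls_sort = swiss;
alter session set nls_comp = linguistic;

-- this now sorts accented characters correctly without NLSSORT
select country_name from vw_temp_countries order by country_name;
```

Do note that NLS_COMP = LINGUISTIC also changes how WHERE-clause comparisons behave and can prevent the use of ordinary (binary) indexes, so test before using it beyond an ad-hoc session.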

That's it — as promised, short and sweet! 😀

19 November 2021

How I got SQL Developer working on a new MacBook Pro (macOS Monterey - M1 Pro)



Note: See updates 2021-11-23, 2022-02-17 and 2022-06-29 below concerning the use of GraalVM's JDK as an alternative to JDK 17. This is the solution that I am currently using.
I'm hoping that this will be of help to others facing similar issues.

I got my new MacBook Pro (M1 Pro) a few days ago, and then set it up by restoring a Time Machine backup from my MacMini (also M1 architecture) and quickly started using my new laptop.

Very soon, I saw that SQL Developer was crashing. Sometimes it would crash immediately, sometimes after a few minutes.

While looking for a solution, I took a look at some of the forum posts on https://community.oracle.com/tech/developers/categories/sql_developer. Most of what I did comes from what I gleaned reading various posts there. 100% of the credit goes to those that contributed in the forum.

I'll spare you all the various different combinations and attempts that I made that didn't work. The following is what actually worked for me.


Download and install JDK 17

I went to the Oracle Java Downloads page at https://www.oracle.com/java/technologies/downloads/

and downloaded the file: jdk-17.0.1_macos-aarch64_bin.dmg



Opened the .dmg and double-clicked on the JDK 17.0.1.pkg installation package to open the installer.


Followed all the steps to install JDK 17.




After the installation, I checked my folder /Library/Java/JavaVirtualMachines to verify that JDK 17 was installed there.



Change the SQL Developer product.conf file to use JDK 17

To make SQL Developer use the new JDK, I needed to locate and edit the product.conf file for my version of SQL Developer.  These files are found in the hidden .sqldeveloper directory under your home directory.



As you can see here there are a lot of directories from the various versions of SQL Developer that I've installed and used over the years.  My current version is 21.2.1, so this is the directory that I want to change my file in.






I edited the product.conf file using vi and added the following line to make sure that this version of SQL Developer would use the new JDK 17 that I installed.  The SetJavaHome entry sets the Java Home to the directory containing this newly installed version.

SetJavaHome /Library/Java/JavaVirtualMachines/jdk-17.0.1.jdk/Contents/Home


This is what that section of my product.conf file looked like after editing.


Start SQL Developer

When starting SQL Developer, I get an "Unsupported JDK version" dialog that immediately pops up.  I choose to ignore this warning and click "Yes" to continue anyway.



The next message that I get concerns JavaFX. 



I was a little worried when I first saw this JavaFX pop-up, but then I read Jeff Smith's post from last November, which was reassuring.  According to Jeff, JavaFX is only used in a few screens within SQL Developer, and I can certainly live with this issue for now.


Conclusion

So that's it.  I have a working version of SQL Developer on my MacBook Pro.
It worked for me.  I hope that it works for you or at least gets you moving closer to a solution.

Happy Developing!


Update 2021-11-23 - using GraalVM's JDK 11 as an alternative JDK

In his SQL Developer community forum post, Philipp Salvisberg suggests using GraalVM's JDK 11, which can be downloaded here.  I have tested his solution and it works for me - even the Welcome Page of SQL Developer works using this method.  Thanks, Philipp.




Note: I had to remove the quarantine attribute with the following command: 

sudo xattr -r -d com.apple.quarantine /Library/Java/JavaVirtualMachines/graalvm-ce-java11-21.2.0

 

Update 2022-02-17 - SQL Developer 21.4.2

I have just upgraded to 21.4.2.  I once again edited the product.conf file to point to the GraalVM JDK.  Works just fine for now.

SetJavaHome /Library/Java/JavaVirtualMachines/graalvm-ce-java11-21.2.0/Contents/Home



Update 2022-06-29 - SQL Developer 22.2.0

I have just upgraded to 22.2.0.  Works fine.  No issues to report for now.

30 August 2021

Autonomous DB "You have exceeded the maximum number of web service requests per workspace"

We recently had an experience on an Oracle Autonomous Database where our production instance started giving us lots of errors saying:

ORA-20001: You have exceeded the maximum number of web service requests per workspace. Please contact your administrator.

As these two blog posts tell us, in a self-managed or in-house APEX installation, the page for changing the Maximum Web Service Requests parameter can be found under "Security settings -> Workspace Isolation -> Maximum Web Service Requests".  We can increase the parameter there and fix the issue.

However, on the Autonomous DB these pages are not available.  So the questions become: Can we change this parameter? and if so, how and where?

A further flurry of Googling and a deeper dive into the Oracle Documentation led us to the following page: https://docs.oracle.com/en/cloud/paas/autonomous-database/adbsa/apex-web-services.html#GUID-DA24C605-384D-4448-B73C-D00C02F5060E

Here we see that there is an APEX instance-level parameter called MAX_WEBSERVICE_REQUESTS, which can be queried and modified using the APEX_INSTANCE_ADMIN package.  The default value of this parameter on an Autonomous DB is currently 50'000 outgoing requests in a rolling 24-hour period.  To run this package you must be connected as the ADMIN user.

To view the current value of MAX_WEBSERVICE_REQUESTS, we can execute the following query, which uses the GET_PARAMETER function.

select apex_instance_admin.get_parameter
          ('MAX_WEBSERVICE_REQUESTS') as requests
  from dual;

REQUESTS
--------
50000

To change this value, we can use the SET_PARAMETER procedure:

begin
 apex_instance_admin.set_parameter
     ('MAX_WEBSERVICE_REQUESTS', '250000');  -- increase to 250'000
 commit;
end;
/

Rerunning the same query confirms that our change has taken effect.

select apex_instance_admin.get_parameter
          ('MAX_WEBSERVICE_REQUESTS') as requests
  from dual;

REQUESTS
--------
250000

I hope that this blog post helps someone out there avoid the minor panic that we experienced for a short while today.

Happy APEXing to all!

23 January 2021

Making XML tags dynamic in SQL and PL/SQL

While trying to produce XML using Oracle's native XML functions, I needed some of the XML tags to be dynamic.  To simplify and illustrate the problem that I encountered, I'll show an example that uses the time-tested, traditional EMP and DEPT tables.  

Let's say that we need to produce something like this for all departments.

<departments>
  <accounting>
    <employee>Clark</employee>
    <employee>King</employee>
    <employee>Miller</employee>
  </accounting>
  ...
</departments>


Let's start with a short SQL query to get an aggregated employee list for each department.  The result is four rows of XMLTYPE - one for each department.

select xmlelement("department",
         xmlagg(xmlelement("employee", initcap(e.ename))))
  from dept d
  left outer join emp e on (e.deptno = d.deptno)
 group by d.deptno;

Result (4 rows):

<department>
  <employee>King</employee>
  <employee>Miller</employee>
  <employee>Clark</employee>
</department>

<department>
  <employee>Jones</employee>
  <employee>Adams</employee>
  <employee>Smith</employee>
  <employee>Ford</employee>
  <employee>Scott</employee>
</department>

<department>
  <employee>Blake</employee>
  <employee>James</employee>
  <employee>Turner</employee>
  <employee>Martin</employee>
  <employee>Ward</employee>
  <employee>Allen</employee>
</department>

<department>
  <employee></employee>
</department>


Next, we'll aggregate these inside a single outer tag called "departments".


select xmlelement("departments",
         xmlagg(xmlelement("department",
           xmlagg(xmlelement("employee", initcap(ename))))))
  from dept d
  left outer join emp e on (e.deptno = d.deptno)
 group by d.deptno, d.dname;

Result:

<departments>
  <department>
    <employee>King</employee>
    <employee>Miller</employee>
    <employee>Clark</employee>
  </department>
  <department>
    <employee>Jones</employee>
    <employee>Adams</employee>
    <employee>Smith</employee>
    <employee>Ford</employee>
    <employee>Scott</employee>
  </department>
  <department>
    <employee>Blake</employee>
    <employee>James</employee>
    <employee>Turner</employee>
    <employee>Martin</employee>
    <employee>Ward</employee>
    <employee>Allen</employee>
  </department>
  <department>
    <employee></employee>
  </department>
</departments>


Now we'll try to change the <department> tag to the actual department name by replacing xmlelement("department", with xmlelement(dname, - and we'll see that the value of dname doesn't appear.



select xmlelement("departments",
         xmlagg(xmlelement(dname,
           xmlagg(xmlelement("employee", initcap(ename))))))
  from dept d
  left outer join emp e on (e.deptno = d.deptno)
 group by d.deptno, d.dname;

Result:

<departments>
  <DNAME>
    <employee>King</employee>
    <employee>Miller</employee>
    <employee>Clark</employee>
  </DNAME>
  <DNAME>
    <employee>Jones</employee>
    <employee>Adams</employee>
    <employee>Smith</employee>
    <employee>Ford</employee>
    <employee>Scott</employee>
  </DNAME>
  <DNAME>
    <employee>Blake</employee>
    <employee>James</employee>
    <employee>Turner</employee>
    <employee>Martin</employee>
    <employee>Ward</employee>
    <employee>Allen</employee>
  </DNAME>
  <DNAME>
    <employee></employee>
  </DNAME>
</departments>

So, as we can see, the value of the dname column has not been evaluated: the query has simply used the literal string DNAME as the tag.  The problem now becomes "how can we force our query to use the value of dname as an XML tag?"

And, of course, Oracle has given us a solution: the evalname keyword tells the query that the expression following it is to be evaluated, and that the result of that evaluation should be used as the XML tag.  Armed with this knowledge, we'll now make a small change to the query.


select xmlelement("departments",
         xmlagg(xmlelement(evalname lower(dname),
           xmlagg(xmlelement("employee", initcap(ename))))))
  from dept d
  left outer join emp e on (e.deptno = d.deptno)
 group by d.deptno, d.dname;

Result:

<departments>
  <accounting>
    <employee>King</employee>
    <employee>Miller</employee>
    <employee>Clark</employee>
  </accounting>
  <research>
    <employee>Jones</employee>
    <employee>Adams</employee>
    <employee>Smith</employee>
    <employee>Ford</employee>
    <employee>Scott</employee>
  </research>
  <sales>
    <employee>Blake</employee>
    <employee>James</employee>
    <employee>Turner</employee>
    <employee>Martin</employee>
    <employee>Ward</employee>
    <employee>Allen</employee>
  </sales>
  <operations>
    <employee></employee>
  </operations>
</departments>

So, as we can see, the value of the expression following the evalname keyword has been used as the XML tag.

Of course, the above is a simple example to illustrate the use of evalname.
The actual problem that I was solving involved calculating multiple tag values according to a reasonably complex piece of business logic implemented in PL/SQL packages.  These tag values were then passed as parameters to the procedure generating the XML, and from there it was easy to just use evalname parameter_name for the dynamic tag names.
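As a rough sketch of that last idea - the procedure and parameter names here are invented for illustration, not the actual production code - a tag name computed elsewhere can simply be passed in and used with evalname:

```sql
-- Hypothetical sketch: p_tag_name would be computed by the business
-- logic packages and passed in; evalname uses its value as the tag.
create or replace procedure build_dept_xml (
    p_tag_name in varchar2,            -- dynamic tag name, e.g. 'accounting'
    p_deptno   in dept.deptno%type,
    p_result   out xmltype
) as
begin
    select xmlelement(evalname lower(p_tag_name),
             xmlagg(xmlelement("employee", initcap(e.ename))))
      into p_result
      from emp e
     where e.deptno = p_deptno;
end build_dept_xml;
/
```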

The above was run and tested on the Oracle Autonomous Database Cloud using an "Always Free" database that is running database version 19c at the time of writing.

I hope that this is useful for some of you.  Happy XMLing with SQL and PL/SQL!