04 June 2024

Reading and writing files on Amazon S3 from Oracle Autonomous Database and PL/SQL

In our mixed cloud environment of AWS and Oracle Autonomous databases, sometimes it's necessary to move files/blobs between the database and AWS S3 storage.  This post describes one way of moving them directly between the database and AWS S3 storage.  

We'll start by setting up an AWS user and user group with the permissions needed.  Then we'll create OCI credentials using PL/SQL and the DBMS_CLOUD package.  Finally, we'll move some image blobs over and back between our Oracle database's PL/SQL environment and AWS S3.


Stage One - create an AWS group, a user and get access keys


Create a User Group and assign the permission policy AmazonS3FullAccess to it.

Using the "User Groups -> Create group" wizard, we'll create a new group called S3FullAccess2024.  We'll also attach the Amazon-supplied policy "AmazonS3FullAccess" to this group









Create an S3 User

Log in to your AWS Identity and Access Management (IAM) Console and create a new user.  Let's call this user S3PLSQL.  We'll follow the "Create User" wizard that we'll select from the Users option on the Identity and Access Management menu.


Create user - step 1: User details




Create user - step 2: Set permissions by adding the user to the S3FullAccess2024 group that we created earlier




Create user - step 3: Review and create user




Create user - finished: User created



Create and get your Access Key and Secret Key

Next we'll view the user and create an Access Key.  To keep things as simple as possible for now, we'll select "Other" as our use-case.

Create Access key - step 1: Choose a use-case




Create Access key - step 2: Add an optional description




Create Access key - step 3: Retrieve the access key and secret


Important: Don't forget to store your key and secret somewhere safe, as this is the only time that you will see the secret.  If you lose the secret later on, you'll have to create a new access key.





Stage Two - create an OCI credential using DBMS_CLOUD

The good news is that it's all in an Oracle environment from here on.  

Firstly, from an admin account, make sure that your database user (in this case "myuser") can execute procedures in the dbms_cloud package.
grant execute on dbms_cloud to myuser;

Connect as myuser and, using the Access Key (created earlier at the end of Stage One) as your username and the Secret Key as your password, create a credential using DBMS_CLOUD.CREATE_CREDENTIAL.  Let's call this credential "s3test_cred".

begin
  dbms_cloud.create_credential
  (credential_name => 's3test_cred',
   username => 'A*****************7U',
   password => 'y*********************************V');
end;
/
Once the above is complete, the credential belongs to the myuser schema and we can continue working as myuser.
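As a quick sanity check, the new credential should now be visible when myuser queries the USER_CREDENTIALS data dictionary view (view and column names as I understand them in recent database versions):

```sql
-- confirm that the credential exists and is enabled
-- (credential names are stored in uppercase)
select credential_name, username, enabled
  from user_credentials
 where credential_name = 'S3TEST_CRED';
```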



Stage Three - access and update an S3 bucket using PL/SQL and DBMS_CLOUD


Firstly, ensure that we're connected as the myuser account (or whatever name we gave it).  Using the credential that we've just created, we'll try to list the contents of an existing bucket.  In this case, let's use a pre-created bucket called plsql-s3 containing three image files.

Before we start, we'll just take a quick look at the bucket's contents using S3's web interface.  In this example, we can see that it currently contains 3 jpegs.






Let's start by listing all the files in the bucket using DBMS_CLOUD.LIST_OBJECTS.  This can be done with a simple SQL query.
 
We'll pass two parameters, the credentials that we created in Stage 2 and the path to the S3 bucket that we want to list.

-- list the contents of an S3 bucket
select f.*
  from dbms_cloud.list_objects
         ('s3test_cred'
         ,'https://s3.eu-west-1.amazonaws.com/plsql-s3/') f;

Query output - 3 jpegs




Let's get one of the files and read it into a PL/SQL blob variable using the DBMS_CLOUD.GET_OBJECT function.  We'll then check the length of the retrieved blob, just to show ourselves that the get was successful and that the blob is the same size as the file on S3.

-- read a file from S3 into a blob
set serveroutput on
declare
  l_file blob;
begin

  l_file := 
  dbms_cloud.get_object
    (credential_name => 's3test_cred',
     object_uri      
      => 'https://s3.eu-west-1.amazonaws.com/plsql-s3/Sheep.jpeg');

  dbms_output.put_line
    ('retrieved blob length is: '||dbms_lob.getlength(l_file));
end;
/

retrieved blob length is: 2622899

PL/SQL procedure successfully completed.


Now we'll read the file into a blob again and use the DBMS_CLOUD.PUT_OBJECT procedure to write a copy of it back to S3 as a new file.

set serveroutput on
declare
  l_file blob;
begin
  -- read the file from S3 into a blob
  l_file := 
  dbms_cloud.get_object
    (credential_name => 's3test_cred',
     object_uri      
      => 'https://s3.eu-west-1.amazonaws.com/plsql-s3/Sheep.jpeg');

  -- using the blob that we read, we'll create a new file on S3
  dbms_cloud.put_object (
    credential_name => 's3test_cred',
    object_uri      
     => 'https://s3.eu-west-1.amazonaws.com/plsql-s3/Sheep2.jpeg',
    contents => l_file);

end;
/

PL/SQL procedure successfully completed.


-- let's check if the new file "Sheep2.jpeg" has been created

select f.*
  from dbms_cloud.list_objects
         ('s3test_cred'
         ,'https://s3.eu-west-1.amazonaws.com/plsql-s3/') f;

Results: 






And, using the AWS web interface to verify the contents of our S3 bucket, we'll see that the new file Sheep2.jpeg is now visible in the bucket.



So now we are using the Oracle DBMS_CLOUD package and its LIST_OBJECTS, GET_OBJECT and PUT_OBJECT subprograms to list, read and write S3 objects directly from the database.
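To tidy up, the copied file can be removed again with the DBMS_CLOUD.DELETE_OBJECT procedure (a small sketch using the same credential and bucket as above):

```sql
-- remove the copied file from the S3 bucket
begin
  dbms_cloud.delete_object
    (credential_name => 's3test_cred',
     object_uri
      => 'https://s3.eu-west-1.amazonaws.com/plsql-s3/Sheep2.jpeg');
end;
/
```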

To follow on from here, we could, for example, load files into an Oracle database table using Oracle APEX and then move them to S3 for permanent storage.  The approach and techniques described above can be used in this and many other similar scenarios.
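A sketch of that table-to-S3 step might look like the following (the my_files table and its columns are invented for illustration):

```sql
-- hypothetical example: archive a blob stored in a table to S3
declare
  l_file blob;
begin
  -- fetch the blob that was previously uploaded via APEX
  select file_content into l_file
    from my_files
   where file_name = 'Sheep3.jpeg';

  -- write it to the bucket for permanent storage
  dbms_cloud.put_object (
    credential_name => 's3test_cred',
    object_uri
     => 'https://s3.eu-west-1.amazonaws.com/plsql-s3/Sheep3.jpeg',
    contents => l_file);
end;
/
```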

I hope that this is useful to some of you.  Wishing you lots of fun on your Oracle/AWS journeys!
 

P.S. added by popular demand, here's the Sheep Photo... 🐑

taken on a beautiful summer's day on Achill Island, Co. Mayo, Ireland       
© 2024 Niall Mc Phillips












19 August 2022

Correctly sorting data containing accented characters (a.k.a. the Côte d'Ivoire and Türkiye issue)

This will be short and sweet.  Hopefully it will be useful to some of you.

As you may or may not know, the country formerly known as Turkey recently changed its official name to Türkiye, even in English.  However, when sorting by country name, it should appear between Tunisia and Turkmenistan rather than later in the list.  (Official U.N. sorted list can be found here.)

In the weeks following this change, I was asked on at least five separate occasions to look at sorting issues that arose.  Please accept my apologies in advance if you already know what I'm about to write.  I, myself, thought that it was common knowledge, but recent experience has shown otherwise.

Below, I'll explain how I correctly sort by country within Oracle.  This technique can also apply to any other data (names, etc.) that need to take account of accented characters when sorting.  I hope that it's useful to some of you.

Note: What comes below applies principally to Latin alphabets, I have not tested on non-Latin alphabets, but I suspect that a similar approach exists.

First, demonstrating the incorrect sort

This is what some use by default, the problem is that accented characters are sorted after non-accented letters.  

So we can see that both Côte d'Ivoire and Türkiye are incorrectly sorted here and are placed after their peers.

SQL> select country_name from vw_temp_countries order by country_name; 

COUNTRY_NAME                                                                     
Cook Islands
Costa Rica
Croatia
Cuba
Curaçao
Cyprus
Czechia
Côte d'Ivoire
Tunisia
Turkmenistan
Tuvalu
Türkiye

13 rows selected. 


Now, a correct sort

With this sort, accented characters are taken into account and sorted appropriately.  The Oracle NLSSORT function is used to ensure the correct sort order.  The NLS_SORT parameter is set to swiss, as this setting accommodates most languages with Latin characters.

Here we can see that both Côte d'Ivoire and Türkiye are correctly sorted and are placed in their correct order.

SQL> select country_name from vw_temp_countries 
          order by nlssort(country_name, 'NLS_SORT = swiss');

COUNTRY_NAME                                                                     
Cook Islands
Costa Rica
Côte d'Ivoire
Croatia
Cuba
Curaçao
Cyprus
Czechia
Tunisia
Türkiye
Turkmenistan
Tuvalu

13 rows selected. 
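As a side note, if the whole session should sort this way rather than individual queries, the same effect can be had with session NLS parameters - NLS_SORT drives ORDER BY, and NLS_COMP=LINGUISTIC additionally makes comparison operators use the linguistic rules.  A sketch:

```sql
-- make every ORDER BY in this session use the swiss linguistic sort
alter session set nls_sort = swiss;
-- optionally make =, <, like, etc. linguistic as well
alter session set nls_comp = linguistic;

-- a plain order by now sorts accented characters correctly
select country_name from vw_temp_countries order by country_name;
```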



That's it, as I promised "short and sweet" 😀

19 November 2021

How I got SQL Developer working on a new MacBook Pro (MacOS Monterey - M1 Pro)



Note: See updates 2021-11-23, 2022-02-17 and 2022-06-29 below concerning the use of GraalVM's JDK as an alternative to JDK 17. This is the solution that I am currently using.
I'm hoping that this will be of help to others facing similar issues.

I got my new MacBook Pro (M1 Pro) a few days ago, and then set it up by restoring a Time Machine backup from my MacMini (also M1 architecture) and quickly started using my new laptop.

Very soon, I saw that SQL Developer was crashing. Sometimes it would crash immediately, sometimes after a few minutes.

While looking for a solution, I took a look at some of the forum posts on https://community.oracle.com/tech/developers/categories/sql_developer. Most of what I did comes from what I gleaned reading various posts there. 100% of the credit goes to those that contributed in the forum.

I'll spare you all the various different combinations and attempts that I made that didn't work. The following is what actually worked for me.


Download and install JDK 17

I went to the Oracle Java Downloads page at https://www.oracle.com/java/technologies/downloads/

and downloaded the file: jdk-17.0.1_macos-aarch64_bin.dmg



Opened the .dmg and double-clicked on the JDK 17.0.1.pkg installation package to open the installer.


Followed all the steps to install JDK 17.




After the installation, I checked my folder /Library/Java/JavaVirtualMachines to verify that JDK 17 was installed there.



Change the SQL Developer product.conf file to use JDK 17

To make SQL Developer use the new JDK, I needed to locate and edit the product.conf file for my version of SQL Developer.  These files are found in the hidden .sqldeveloper directory under your home directory.



As you can see here there are a lot of directories from the various versions of SQL Developer that I've installed and used over the years.  My current version is 21.2.1, so this is the directory that I want to change my file in.






I edited the product.conf file using vi and added the following line to make sure that this version of SQL Developer would use the new JDK 17 that I installed.  The SetJavaHome entry sets the Java Home to the directory containing this newly installed version.

SetJavaHome /Library/Java/JavaVirtualMachines/jdk-17.0.1.jdk/Contents/Home


This is what that section of my product.conf file looked like after editing.


Start SQL Developer

When starting SQL Developer, an "Unsupported JDK version" warning immediately pops up.  I choose to ignore this warning and click "Yes" to continue anyway.



The next message that I get concerns JavaFX. 



I was a little worried when I first saw this JavaFX pop-up, but then I read Jeff Smith's post from last November which was reassuring.  According to Jeff, JavaFX is only used in a few screens within SQL Developer, and I can certainly live with this issue for now.


Conclusion

So that's it.  I have a working version of SQL Developer on my MacBook Pro.
It worked for me.  I hope that it works for you or at least gets you moving closer to a solution.

Happy Developing!


Update 2021-11-23 - using GraalVM's JDK 11 as an alternative JDK

In his SQL Developer community forum post, Philipp Salvisberg suggests using GraalVM's JDK 11, which can be downloaded here.  I have tested his solution and it works for me - even the Welcome Page of SQL Developer works using this method.  Thanks Philipp.




Note: I had to remove the quarantine attribute with the following command: 

sudo xattr -r -d com.apple.quarantine /Library/Java/JavaVirtualMachines/graalvm-ce-java11-21.2.0

 

Update 2022-02-17 - SQL Developer 21.4.2

I have just upgraded to 21.4.2.  I once again edited the product.conf file to point to the GraalVM JDK.  Works just fine for now.

SetJavaHome /Library/Java/JavaVirtualMachines/graalvm-ce-java11-21.2.0/Contents/Home



Update 2022-06-29 - SQL Developer 22.2.0

I have just upgraded to 22.2.0.  Works fine.  No issues to report for now.

30 August 2021

Autonomous DB "You have exceeded the maximum number of web service requests per workspace"

We recently had an experience on an Oracle Autonomous Database where our production instance started giving us lots of errors saying:

ORA-20001: You have exceeded the maximum number of web service requests per workspace. Please contact your administrator.

As these two blog posts tell us, in a self-managed or in-house APEX installation, the page for changing the Maximum Web Service Requests parameter can be found under "Security settings -> Workspace Isolation -> Maximum Web Service Requests".  We can increase the parameter there and fix the issue.

However, on the Autonomous DB these pages are not available.  So the questions become: Can we change this parameter? and if so, how and where?

A further flurry of Googling and a deeper dive into the Oracle Documentation led us to the following page: https://docs.oracle.com/en/cloud/paas/autonomous-database/adbsa/apex-web-services.html#GUID-DA24C605-384D-4448-B73C-D00C02F5060E

Here we see that there is an APEX instance-level parameter called MAX_WEBSERVICE_REQUESTS which can be queried and modified using the APEX_INSTANCE_ADMIN package.  The default value of this parameter on an Autonomous DB is currently 50'000 outgoing requests in a rolling 24-hour period.  To run this package, you must be connected as the ADMIN user.

To view the current value of MAX_WEBSERVICE_REQUESTS, we can execute the following query that uses the GET_PARAMETER function.

select apex_instance_admin.get_parameter
          ('MAX_WEBSERVICE_REQUESTS') as requests
  from dual;

REQUESTS
--------
50000

To change this value, we can use the SET_PARAMETER procedure:

begin
 apex_instance_admin.set_parameter
     ('MAX_WEBSERVICE_REQUESTS', '250000');  -- increase to 250'000
 commit;
end;
/

If we rerun the preceding query, we now get a different result that confirms that our change has worked.

select apex_instance_admin.get_parameter
          ('MAX_WEBSERVICE_REQUESTS') as requests
  from dual;

REQUESTS
--------
250000

I hope that this blog post helps someone out there avoid the minor panic that we experienced for a short while today.

Happy APEXing to all!

23 January 2021

Making XML tags dynamic in SQL and PL/SQL

While trying to produce XML using Oracle's native XML functions, I needed some of the XML tags to be dynamic.  To simplify and illustrate the problem that I encountered, I'll show an example that uses the time-tested, traditional EMP and DEPT tables.  

Let's say that we need to produce something like this for all departments.

<departments>
  <accounting>
    <employee>Clark</employee>
    <employee>King</employee>
    <employee>Miller</employee>
  </accounting>
  ...
</departments>


Let's start with a short SQL query to get an aggregated employee list for each department.  The result is four rows of XMLTYPE - one for each department.

select xmlelement("department", xmlagg(xmlelement("employee", initcap(e.ename)))) from dept d left outer join emp e on (e.deptno = d.deptno) group by d.deptno;

Result (4 rows):

<department> <employee>King</employee> <employee>Miller</employee> <employee>Clark</employee> </department>

<department> <employee>Jones</employee> <employee>Adams</employee> <employee>Smith</employee> <employee>Ford</employee> <employee>Scott</employee> </department>

<department> <employee>Blake</employee> <employee>James</employee> <employee>Turner</employee> <employee>Martin</employee> <employee>Ward</employee> <employee>Allen</employee> </department>

<department> <employee></employee> </department>


Next we'll aggregate these inside a single outer tag called "departments".


select xmlelement("departments", xmlagg( xmlelement("department", xmlagg(xmlelement("employee", initcap(ename)))))) from dept d left outer join emp e on (e.deptno = d.deptno) group by d.deptno, d.dname;

Result:

<departments> <department> <employee>King</employee> <employee>Miller</employee> <employee>Clark</employee> </department> <department> <employee>Jones</employee> <employee>Adams</employee> <employee>Smith</employee> <employee>Ford</employee> <employee>Scott</employee> </department> <department> <employee>Blake</employee> <employee>James</employee> <employee>Turner</employee> <employee>Martin</employee> <employee>Ward</employee> <employee>Allen</employee> </department> <department> <employee></employee> </department> </departments>


Now we'll try to change the <department> tag to have the value of the actual department name by replacing xmlelement("department", ... with xmlelement(dname, ... - and we'll see that the value of dname doesn't appear.



select xmlelement("departments", xmlagg( xmlelement(dname, xmlagg(xmlelement("employee", initcap(ename)))))) from dept d left outer join emp e on (e.deptno = d.deptno) group by d.deptno, d.dname;

Result:

<departments> <DNAME> <employee>King</employee> <employee>Miller</employee> <employee>Clark</employee> </DNAME> <DNAME> <employee>Jones</employee> <employee>Adams</employee> <employee>Smith</employee> <employee>Ford</employee> <employee>Scott</employee> </DNAME> <DNAME> <employee>Blake</employee> <employee>James</employee> <employee>Turner</employee> <employee>Martin</employee> <employee>Ward</employee> <employee>Allen</employee> </DNAME> <DNAME> <employee></employee> </DNAME> </departments>

So, as we can see, the value of the dname column has not been interpreted and used for the tag.  The query is simply using the string DNAME instead.   The problem now becomes "how can we force our query to use the value of dname as an XML tag?"

And, of course, Oracle have given us a solution - the evalname keyword tells the query that the expression following it is to be evaluated and that the result of that evaluation should be used as the XML tag.  Armed with this knowledge, we'll now make a small change to the query.


select xmlelement("departments", xmlagg( xmlelement(evalname lower(dname), xmlagg(xmlelement("employee", initcap(ename)))))) from dept d left outer join emp e on (e.deptno = d.deptno) group by d.deptno, d.dname;

Result:

<departments> <accounting> <employee>King</employee> <employee>Miller</employee> <employee>Clark</employee> </accounting> <research> <employee>Jones</employee> <employee>Adams</employee> <employee>Smith</employee> <employee>Ford</employee> <employee>Scott</employee> </research> <sales> <employee>Blake</employee> <employee>James</employee> <employee>Turner</employee> <employee>Martin</employee> <employee>Ward</employee> <employee>Allen</employee> </sales> <operations> <employee></employee> </operations> </departments>

So, as we can see, the value of the expression following the evalname keyword has been used as the XML tag.

Of course, the above example is a simple example to illustrate the use of evalname.
The actual problem that I was solving involved calculating multiple tag values according to a reasonably complex piece of business logic that was implemented via PL/SQL packages.  These tag values were then passed as parameters to the procedure generating the XML.  From there it was easy to just use evalname parameter_name for the dynamic tag names.
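A minimal sketch of that pattern (the function and parameter names here are invented for illustration) - the dynamic tag arrives as a parameter and is fed to evalname:

```sql
-- hypothetical helper: build an employee list under a caller-supplied tag
create or replace function employees_under_tag
  (p_tag_name in varchar2)   -- dynamic tag name, calculated elsewhere
  return xmltype
is
  l_result xmltype;
begin
  select xmlelement(evalname lower(p_tag_name),
           xmlagg(xmlelement("employee", initcap(e.ename))))
    into l_result
    from emp e;
  return l_result;
end;
/
```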

The above was run and tested on the Oracle Autonomous Database Cloud using an "Always Free" database that is running database version 19c at the time of writing.

I hope that this is useful for some of you.  Happy XMLing with SQL and PL/SQL!




14 December 2020

Ensuring that XMLTYPE to JSON transformations create JSON arrays - even when there is only a single element.

Recently, my colleagues and I were transforming a large quantity of XML to JSON via the APEX_JSON package.  We wanted any XML tag that was repeated more than once to be transformed into a JSON array.  However, there was an issue with repeatable tags: whenever there was only one instance of the element, it wasn't created as an array.

Here's a very simplified example to illustrate the issue.

<parents>
  <parent>
    <name>Aidan</name>
    <children>
      <child>Aoife</child>    <=== two entries
      <child>Fionn</child>
    </children>
  </parent>
  <parent>
    <name>Eamon</name>
    <children>
      <child>Saoirse</child>  <=== a single entry
    </children>
  </parent>
</parents>

When an XML document such as the one above is converted to JSON, the two children of Aidan are represented as an array containing two elements - but the single child of Eamon is not.  
This can cause some issues for JSON parsers consuming the data, as they now have to cater for two scenarios (with-array and without-array) to correctly extract the data. 
[
   {
      "name":"Aidan",
      "children":[         <=== this is what we want, an array.
         "Aoife",
         "Fionn"
      ]
   },
   {
      "name":"Eamon",
      "children":{         <=== we wanted an array here too :(
         "child":"Saoirse"
      }
   }
]

This piece of PL/SQL code illustrates the issue.
set serveroutput on
declare
  v_xml    xmltype;
  v_json   clob;
begin
  v_xml := xmltype(
  '<parents>
      <parent>
        <name>Aidan</name>
        <children>
          <child>Aoife</child>
          <child>Fionn</child>
        </children>
      </parent>
      <parent>
        <name>Eamon</name>
        <children>
          <child>Saoirse</child>
        </children>
      </parent>
    </parents>');

  apex_json.initialize_clob_output;
  apex_json.write(v_xml);
  v_json := apex_json.get_clob_output;
  apex_json.free_output;  
  
  dbms_output.put_line(v_json);
end;
/
[
 {"name":"Aidan",
  "children":["Aoife","Fionn"]},    <=== is an array
 {"name":"Eamon",
  "children":{"child":"Saoirse"}}   <=== is not an array :(
]
We tried several ways to work around this.  For example, one unsatisfactory solution that we tried was to create an empty <child/> tag whenever there was only one entry and then subsequently strip it out of the resulting JSON.

However, a quick communication with the ever-responsive APEX Development Team yielded a more elegant solution... 

Apparently, there are some naming conventions used by the DB XML to JSON generators.  Amongst these are :
  • if an XML node name is "rowset", then it always maps to a JSON array. 
  • if an XML node has a sub-node whose name ends in "_row", then it also always maps to a JSON array.
Armed with this knowledge, we modified our XML slightly and renamed the <child> tag to <child_row> so that our PL/SQL example now becomes:
set serveroutput on
declare
  v_xml    xmltype;
  v_json   clob;
begin
  v_xml := xmltype(
  '<parents>
      <parent>
        <name>Aidan</name>
        <children>
          <child_row>Aoife</child_row>
          <child_row>Fionn</child_row>
        </children>
      </parent>
      <parent>
        <name>Eamon</name>
        <children>
          <child_row>Saoirse</child_row>
        </children>
      </parent>
    </parents>');

  apex_json.initialize_clob_output;
  apex_json.write(v_xml);
  v_json := apex_json.get_clob_output;
  apex_json.free_output;  
  
  dbms_output.put_line(v_json);
end;
/
[ 
 {"name":"Aidan",
  "children":["Aoife","Fionn"]},  <=== is an array
 {"name":"Eamon",
  "children":["Saoirse"]}         <=== is also an array!!
]

Now we can consistently generate arrays without having to resort to complex pre- and post-processing.  The only little bit of pre-processing that we need to do is ensuring that any XML tags that must become JSON arrays end with "_row".
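For simple documents like the one above, that renaming step can be as small as a pair of REPLACE calls on the XML text before it is parsed (a rough sketch; for XML where "<child>" could legitimately appear inside data, a proper DOM- or XSLT-based rename would be safer):

```sql
declare
  v_doc clob := '<children><child>Saoirse</child></children>';
begin
  -- rename the repeatable tag so the JSON generator emits an array
  v_doc := replace(v_doc, '<child>',  '<child_row>');
  v_doc := replace(v_doc, '</child>', '</child_row>');
  dbms_output.put_line(v_doc);
end;
/
```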

Many thanks to the great APEX Development Team and especially to Christian Neumueller for the support and help that they gave us for this issue!!

The above was run and tested on the Oracle Autonomous Database Cloud using an "Always Free" database that is running database version 19c at the time of writing.

15 September 2020

Making SQL Developer's UI use your preferred language

Recently I was using Oracle's SQL Developer at a client site where all the virtual PCs were configured for a non-English language and my SQL Developer was choosing this language by default for the User Interface.

I'm so used to working with SQL Developer in English that I found it a little distracting to read the menu options, etc. in a different language - even if it's a language that I use on a daily basis.

After a short time searching online I came across this quick fix written by Matthias Karl Schulz in 2011 - and it is essentially still valid today in SQL Developer 20.2.  

To summarise, here's what you need to do:

  1. Find the sqldeveloper.conf file inside your SQL Developer installation. 

    On a PC I found this at
    ...sqldeveloper\sqldeveloper\bin\sqldeveloper.conf

    On MacOS it's at /Applications/SQLDeveloper.app/Contents/Resources/sqldeveloper/sqldeveloper/bin/sqldeveloper.conf
    (Note: On MacOS you should right click on the SQLDeveloper.app file and choose "Show Package Contents" to open these directories.)

  2. For English, just add the following line to the file.  For other languages, you can replace the "=en" by the ISO 639-1 two-letter code for whichever language you wish to force SQL Developer to use.  I've tried German (de) and Spanish (es) just to see how it looks and it works just fine.

    AddVMOption -Duser.language=en

  3. Save the file

  4. Close SQL Developer if it is already open

  5. Restart SQL Developer. 
And there you have it, SQL Developer will now open in the language of your choice.  
Not all languages are available.  For example, I tried the Irish language ("ga" in ISO 639-1) but, as I expected, it wasn't available (yet).

09 August 2019

3 ways to synchronize your Oracle Text indexes

One of my favourite features of the Oracle database is Oracle Text.  In this blog post I'll discuss different approaches to the synchronisation of Oracle Text indexes.

Oracle Text indexes are synchronized via a queue of pending updates.  Whenever the rows upon which the index is built are inserted or changed, those rows are added to the queue.  This queue is then processed according to the synchronization method that you defined at index creation time.  This means that the synchronization is independent of your transaction,  i.e. your transaction completes without waiting for the index synchronization.

It is important to note that the index is not fully recreated when synchronized but is incrementally updated.  Therefore, updates or insertions of a few rows will be synchronised very quickly but an update or insertion of, let's say, 800'000 rows will take significantly longer to synchronize.

You can view the queue of pending rows using the ctx_user_pending view.

To see how many rows are pending for each index:
select pnd_index_name, count(*)
  from ctx_user_pending
 group by pnd_index_name;
When creating your Oracle Text indexes, you have three ways to specify how you want them to be synchronized:
  • manually (this is the default if no syncing method is specified)
create index txt_index on my_table (text_col)
indextype is ctxsys.context;
  • on commit - this will start synchronizing immediately after the transaction is committed
create index txt_index on my_table (text_col)
indextype is ctxsys.context
parameters ('sync (on commit) ');
  • at regular intervals (needs the CREATE JOB privilege) - this example syncs hourly
create index txt_index on my_table (text_col)
indextype is ctxsys.context
parameters ('sync (every "sysdate+(1/24)")');
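With the first (manual) option, the pending queue is only processed when you trigger it yourself, typically by calling CTX_DDL.SYNC_INDEX (this needs the CTXAPP role or execute privilege on CTX_DDL):

```sql
-- manually synchronize the pending changes for a single index
begin
  ctx_ddl.sync_index(idx_name => 'txt_index');
end;
/
```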
Each of these methods has its advantages and disadvantages.  For example, if you can live with an index that is synchronized daily and want to minimise the database load during working hours, maybe you might consider syncing at a regular daily interval.

Similarly, if you have a lightish load and just have a few changes at a time, then maybe an "on commit" synchronization is the right one for you.  In my experience, the majority of Oracle Text applications have (sometimes wrongly) opted for this.

I hope that this has explained these three options in a quick and simple way.  The well-written Oracle Text Developer's Guide contains a lot of detail on how to manage your Oracle Text indexes.

06 June 2019

Setting HTTP Response Status Code 301 (Moved Permanently) in APEX

This will be a short one.  As our public-facing applications age and new applications are developed to replace them, we often have the problem that search engines still send people to the old application pages.

One way of solving this issue, and of giving some feedback to the search engines so that they will update their indexes, is by using the HTTP Response Status Code 301 "Moved Permanently".  Any self-respecting search engine that receives this response code will remove the outdated link from its index.  Also, any users who click on the old link will be automatically redirected to the new one.

Let's take an example: suppose that I have an old APEX application (e.g. app ID 88203) and I want to redirect people to the Universal Theme application (App ID 42) instead of my old page.

The only thing that I need to do is to add a "Before Header" PL/SQL process.  Let's call this process "Permanent Redirect".

This process does three things:
  1. it sets the http status to 301
  2. it redirects to the new url
  3. then it immediately halts all APEX execution by stopping the APEX engine

Here's the PL/SQL code for the process:
owa_util.status_line(301,'',false); 
owa_util.REDIRECT_URL('https://apex.oracle.com/pls/apex/f?p=42:100', true);
apex_application.stop_apex_engine();


Before Header process to be created - Permanent Redirect

The Permanent Redirect process definition




One online tool that I found useful while testing this approach was https://httpstatus.io/, it's an online tool to easily check status code, response headers and redirect chains.

I hope that this comes in useful to you one of these days.  It's a quick and relatively simple approach to solving a common problem.

01 February 2019

Solving an issue regarding escaping substitution strings in Oracle APEX

In a recent dashboard-type application, I was using Google Charts for one of my dashboard regions.  This meant that I needed to place some javascript inside the html page header.

The JavaScript was broken into two types:

  1. static sections that would never change, and
  2. a dynamic section that was generated by assigning the result of a PL/SQL package function to a hidden item named P1_DRAWMAP_JS.
This is the code that was run "before header" to initialise the item.


:P1_DRAWMAP_JS := centutils.genDrawMap (p_language => v(:FSP_LANGUAGE_PREFERENCE));


Once initialised, item P1_DRAWMAP_JS contained part of a JavaScript function with about 200 lines of code.  This code also contained some HTML tags.  A snippet of the generated code is shown here; it's not important to understand it for this article, just note its general structure:

function drawMap() {
var data = new google.visualization.DataTable();
data.addColumn('string', 'Country'); // Implicit domain label col.
data.addColumn('number', 'Value'); // Implicit series 1 data col.
data.addColumn({type:'string', role:'tooltip', p: {html: 'true'}}); //
data.addRows([ [{v:"BA",f:"Bosnia and Herzegovina"},1,"<strong>P029</strong> - entry into force - 09 Aug 2019"],
[{v:"BE",f:"Belgium"},1,"<strong>C187</strong> - entry into force - 31 May 2019"],
[{v:"BG",f:"Bulgaria"},1,"<strong>C131</strong> - entry into force - 20 Mar 2019"],
[{v:"BR",f:"Brazil"},1,"<strong>C189</strong> - entry into force - 31 Jan 2019"],
....
....



The page header looked something like this (below) with the dynamically generated P1_DRAWMAP_JS substitution variable placed between a <script> tag and a </script> tag along with a few other bits of static javascript code.


So everything should work then, right?  Unfortunately not - my region was blank.  When viewing the HTML source, I saw that the JavaScript code contained in my substitution variable had been escaped by APEX.





This is when I decided to search the APEX documentation to see how to get around this problem.  After a few minutes, I came across this documentation on escaping substitution strings.

From the documentation we can see that there are 5 different ways of escaping APEX substitution variables.
They all start with an exclamation mark, immediately follow the item name, and can also be used in report and interactive grid columns.

!HTML escapes reserved HTML characters
!ATTR escapes reserved characters in a HTML attribute context
!JS   escapes reserved characters in a JavaScript context
!RAW  preserves the original item value and does not escape characters
!STRIPHTML removes HTML tags from the output and escapes reserved HTML characters


The solution to my issue was very simple.  I disabled the escaping of my substitution variable by adding !RAW to the name of the substitution string, so it now became &P1_DRAWMAP_JS!RAW. 

Everything worked!  The result was exactly what I expected and my region now looked as it should, like this.


Sometimes the solutions to our problems are very simple. 
And in this case, the APEX development team at Oracle had already anticipated what was needed and provided a mechanism for implementing it.  Another reason that Oracle APEX is one of the best low-code development tools out there!