Bulk Bind Reminder: ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind

Bulk Bind: Truncated bind

I recently had a good reminder on one of the subtleties of PL/SQL bulk fetches and inserts. I kept getting this error on the FETCH:

ORA-06502: PL/SQL: numeric or value error: Bulk Bind: Truncated Bind  

The procedure's purpose is to bulk insert data from a temporary staging table into the real load table, whose columns are exactly sized.  Simple, right?  Except when I use the wrong table types.

My problem?

The elements I was selecting in my cursor did not logically fit into the table type target of my cursor fetch.

I defined my cursor with a query on the temporary staging table (larger VARCHAR2 columns), not the real table (exact-fit VARCHAR2, NUMBER, and DATE columns).
But I defined the table type that receives the cursor results on my real table:


-- Cursor selects from the staging table (wide VARCHAR2 columns)
CURSOR c_dl IS
  SELECT ...
         ...
         ...
    FROM my_staging_table stg
   WHERE action IN ('INSERT', 'UPDATE');

-- But the collection type is based on the load table (exact-fit columns)
TYPE my_load_table_tt IS TABLE OF my_load_table%ROWTYPE
  INDEX BY PLS_INTEGER;

load_data_arr my_load_table_tt;


BEGIN
  OPEN c_dl;
  LOOP
    FETCH c_dl BULK COLLECT INTO load_data_arr LIMIT 1000;  ----- Error happens here!
    FORALL ii IN 1 .. load_data_arr.COUNT
      INSERT INTO my_load_table
      ...
      ...

The Bulk Bind: Truncated Bind error occurs because the cursor's columns, defined against the wider staging table, cannot logically fit into the exact-fit fields of the defined array/table type – the source columns are too big for the targets.

To correct this, I changed the table type definition to be a table of the staging table's rowtype, as sketched below. Then the FETCH proceeds fine, as does the INSERT.
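A minimal sketch of the corrected declaration, reusing the names from the snippet above (my_staging_table_tt is just an illustrative type name; if the cursor selects only some of the staging columns, basing the type on c_dl%ROWTYPE is the safer choice, since it always matches the select list):

-- Collection type now based on the staging table the cursor reads from
TYPE my_staging_table_tt IS TABLE OF my_staging_table%ROWTYPE
  INDEX BY PLS_INTEGER;

load_data_arr my_staging_table_tt;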

But wait, why didn't the INSERT fail, since I am inserting data from the (potentially) large-column cursor into the exact-fit table?

The INSERT proceeds because the elements being inserted fit – the transforms and validations (performed earlier in the code) have ensured that every element is indeed the size it should be for the exact-fit table.  So the INSERT does not complain.

LOG ERRORS REJECT LIMIT UNLIMITED

Now, if one column in one row were too large, or in the wrong format for a date, the entire INSERT would have failed – unless I used the LOG ERRORS clause, which I did.

If one or more records are bad, I want to load all the good records and single out the bad ones. To do this, I used the LOG ERRORS REJECT LIMIT UNLIMITED clause at the end of the INSERT statement:

INSERT INTO my_load_table
VALUES ( load_data_arr(ii).col1,
         load_data_arr(ii).col2,
         load_data_arr(ii).col3,
         ...
         load_data_arr(ii).coln )
LOG ERRORS REJECT LIMIT UNLIMITED;

This LOG ERRORS REJECT LIMIT UNLIMITED clause ensures that all clean rows will insert, and all “bad” rows get logged to an error table, in my case called ERR$_MY_LOAD_TABLE.

I created the error log table previously, using the command:
BEGIN
  DBMS_ERRLOG.create_error_log (dml_table_name => 'MY_LOAD_TABLE');
END;
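After a run, the rejected rows and the reasons they were rejected can be pulled from that error table. A quick sketch – the ORA_ERR_% columns are created automatically by DBMS_ERRLOG, while col1 and col2 simply stand in for the load table's real columns:

SELECT ora_err_number$,   -- Oracle error number that rejected the row
       ora_err_mesg$,     -- full error message text
       ora_err_optyp$,    -- operation type ('I' for insert)
       col1, col2         -- the offending row's values, as received
  FROM err$_my_load_table;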

We define the staging tables with generous VARCHAR2s on purpose. We want to allow leeway in the incoming data, knowing we can strip punctuation, perform lookups, format dates, and perform other transformations in the validation stage. That means doing all that validation is on us, to make sure the data is in shape for the final insert. Just one way of doing things – there are many other approaches.
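To make that concrete, here is a hypothetical pair of definitions in the spirit described above – the column names and sizes are invented for illustration, not taken from the real system:

-- Staging table: generous VARCHAR2s, everything arrives as text
CREATE TABLE my_staging_table (
  cust_name   VARCHAR2(4000),
  birth_date  VARCHAR2(4000),   -- raw date text, formatted during validation
  amount      VARCHAR2(4000),   -- raw number text, cleaned during validation
  action      VARCHAR2(4000)
);

-- Load table: exact-fit columns the validated data must conform to
CREATE TABLE my_load_table (
  cust_name   VARCHAR2(100),
  birth_date  DATE,
  amount      NUMBER(12,2)
);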

My reminder is – especially when writing things quickly – to pay attention to the nuances of the features in use.  I won’t make that mistake for a while now :).

For a full treatment of bulk processing with BULK COLLECT and FORALL, see Steven Feuerstein’s article PL/SQL 101: Bulk Processing with BULK COLLECT and FORALL.

 

Last Day! ODTUG Innovation Award

June 2nd is the LAST DAY for ODTUG Innovation Award nominations.

Make your nomination for the latest, greatest piece of awe-inspiring work using Oracle development technologies today.

It’s simple: go to the ODTUG Innovation Award Nomination page. Tell us about your nominee – or nominate yourself!  Everyone is eligible – make your entry today.

Is it a Cloud hybrid?  An APEX plugin? An open-source project that leverages Oracle tools?  A really clever in-house application of Oracle JET? A really neat product from Oracle?  Anything goes!

Today is the last chance  to make your 2017 nomination.

The ODTUG Innovation Award honors excellence in creative, effective, and innovative use of Oracle development tools within ODTUG’s supported communities (ADF, APEX, BI, Database, EPM and Career). The goal is to honor developers – individuals or teams – whose passion and creativity shine through in their application of Oracle technology to address real-world problems.  Note that this year we are not excluding anything or anyone, so projects and individuals from all organizations (Oracle, too!) are welcome.

Remember that ODTUG members get to vote this year.  Make your nomination, then vote for it too! Member voting will be combined with the judges’ scoring to determine an overall Innovation Award winner.

Check out the full information on the ODTUG web site here:

ODTUG Innovation Award Nominations

Best wishes to all our nominees!

 

ODTUG Innovation Award ~ Make Your Nomination Now!

Nominations for the 2017 ODTUG Innovation Award are open now through June 2nd.  Make your nomination now, before your Memorial Day weekend.

We are looking for innovative, exceptional, ahead-of-the-curve applications of Oracle development tools that stand out as examples of “the next big thing”, or maybe the really neat next small thing,  in Oracle technology.   Can’t think of anything? Nah, I don’t believe that.  Consider:

  • Do you have, or does your co-worker have, an unusually creative application or combination of new and emerging Oracle tools?
  • Did you find a new open source project that makes your life easier?  Open source projects are eligible, as long as they integrate some Oracle technology.
  • Does your latest project enable faster, smoother, cleaner adoption of an Oracle tool?
  • Are you envying someone else’s work, saying “Wish I thought of that …”?
  • Have you seen a really neat learning tool that aids others in understanding a new Oracle tool?  All projects, large and small, are eligible.
  • Does your organization have an exceptional program for attracting or training persons new to Oracle technology?
  • Have you created, seen or adopted a new plugin or feature, large or small, that others may not have seen yet?

We are looking for the stuff that makes you say “Wow, great idea …”  or “Neat stuff …” or “wicked cool …” (you can tell where I am from).  We need your help to get as many innovation examples as possible in before the June 2nd deadline.

Now is the time to make your nomination.

The ODTUG Innovation Award honors excellence in creative, effective, and innovative use of Oracle development tools within ODTUG’s supported communities (ADF, APEX, BI, Database, EPM and Career). The goal is to honor developers – individuals or teams – whose passion and creativity shine through in their application of Oracle technology to address real-world problems.  Note that this year we are not excluding anything or anyone, so projects and individuals from all organizations (Oracle, too!) are welcome.

Note that ODTUG members get to vote this year.  Make your nomination, then vote for it too! Member voting will be combined with the judges’ scoring to determine an overall Innovation Award winner.

Check out the full information on the ODTUG web site here:

ODTUG Innovation Award Nominations

Nominations close June 2nd, so act soon!

READ_XLSX for CLOBs ~ Extending Anton’s AS_READ_XLSX for CLOBs

I had a need to read a series of MS Excel XLSX files from a file server over a period of time – usually chunks of 20-50 files, several times a year. That’s too many files to make an upload interface practical (without annoying the users), and the files come as is from an external agency so there is no control over the contents. That is, this is NOT a case of “save to CSV format” – the requirement is to read MS Excel .XLSX format files as they come.

I was happy to use Anton Scheffer’s AS_READ_XLSX code … but wary of the VARCHAR2(4000) column limit. Sure enough, a few files in, I hit that limit.

What to do? Modify – adjust – Anton’s code to work with CLOBs, of course. The CLOB-ready package is here, PACKAGE_READ_XLSX_CLOB.

So now I have an AS_READ_XLSX_CLOB package that will read the contents of an XLSX file column into a CLOB.

Why so much data in one MS Excel column? Not my concern – all I need is to be able to read it into my Oracle database.

The adjusted package is available here, PACKAGE_READ_XLSX_CLOB.

The change?  Simple – remove the VARCHAR2(4000) limiting piece – the DBMS_XSLPROCESSOR.VALUEOF function call.

With a little research I discovered the DBMS_XSLPROCESSOR.VALUEOF function was returning into a VARCHAR2(4000) variable. Understandable for a function call.

BUT the DBMS_XSLPROCESSOR.VALUEOF procedure does not have that limit – well, it would, depending on how I declare the variable that holds the VARCHAR2 OUT parameter.  Using the VALUEOF procedure made it easier to use a CLOB variable to accept the OUT parameter.  Of course there is an implicit conversion from the procedure’s VARCHAR2 OUT parameter to the CLOB variable.

I used a CLOB, because that is how I need to store my data. And yes, a significant number of those spreadsheet table cells are indeed > 4000 characters, meriting the CLOB datatype for our purposes.

The change:

Change the string_val declaration from VARCHAR2(4000) to CLOB

  type tp_one_cell is record
    ( sheet_nr number(2)
    , sheet_name varchar(4000)
    , row_nr number(10)
    , col_nr number(10)
    , cell varchar2(100)
    , cell_type VARCHAR2(1)
    , string_val clob        --KC: was --varchar2(4000)
    , number_val number
    , date_val date
    , formula varchar2(4000)
  );

Replace the function calls:

t_strings( t_c ) := dbms_xslprocessor.valueof( dbms_xmldom.item( t_nl, i ), '.' );

with equivalent procedure calls:

dbms_xslprocessor.valueof( dbms_xmldom.item( t_nl, i ), '.', t_strings( t_c ) );

Then I use the AS_READ_XLSX_CLOB package the same as AS_READ_XLSX … for example:

select * from table( as_read_xlsx_clob.read( as_read_xlsx_clob.file2blob( 'DOC', 'Book1.xlsx' ) ) );

or, for ease of reading data in a procedure:

CREATE VIEW MY_SPREADSHEET_V AS select * from table( as_read_xlsx_clob.read( as_read_xlsx_clob.file2blob( 'DOC', 'Book1.xlsx' ) ) );

and then reading data is as simple as:

SELECT string_val
  FROM MY_SPREADSHEET_V
 WHERE row_nr = 1
   AND col_nr = 2;

and its many variations.
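For reading inside PL/SQL, a minimal sketch along the same lines – the view name follows the example above, and l_cell is just an illustrative variable; the CLOB receives the full cell contents, however long:

DECLARE
  l_cell CLOB;   -- large cell contents land here intact
BEGIN
  SELECT string_val
    INTO l_cell
    FROM my_spreadsheet_v
   WHERE row_nr = 1
     AND col_nr = 2;
  -- ... process l_cell as needed ...
END;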

Now I am reading in XLSX spreadsheets with ease, no matter what size.

Thank you Anton ~