BULK COLLECT in Oracle reduces context switches between the PL/SQL and SQL execution engines when selecting data from the database. If you need to fetch multiple records, it is a great Oracle tuning technique for improving query performance.
BULK COLLECT can be used with SELECT statements or with cursors in Oracle. The values are retrieved in a single fetch rather than through multiple database calls, each with its own context switch.
BULK COLLECT: a SELECT statement or a cursor fetch returns all its rows in a single call, which improves speed by eliminating a large number of context switches. Below is the notation for BULK COLLECT in Oracle.
--
SELECT <columns> BULK COLLECT INTO <bulk_variable> FROM <table_name>;

FETCH <cursor_name> BULK COLLECT INTO <bulk_variable>;
--
Bulk Collect and FORALL in Oracle
Normally we use FORALL together with BULK COLLECT, but this article focuses on BULK COLLECT INTO in Oracle. FORALL reduces the context switching between the PL/SQL engine and the SQL engine for DML operations such as INSERT, UPDATE, and DELETE. It is discussed in a separate article with runtime figures.
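For quick reference, the general FORALL notation looks roughly like this (a minimal sketch; the collection, table, and column names are placeholders, not objects from the examples below):

--
FORALL indx IN 1 .. <collection>.COUNT
   UPDATE <table_name>
      SET <column> = <new_value>
    WHERE <key_column> = <collection>(indx);
--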
Context switches and performance
Oracle code is rarely made up of SQL queries alone. In most cases a PL/SQL program includes both PL/SQL and SQL statements. The PL/SQL engine executes PL/SQL blocks, and the SQL engine executes SQL statements.
While executing a PL/SQL block, whenever the PL/SQL engine encounters a SQL statement, it pauses and passes the statement to the SQL engine. The SQL engine executes the statement and returns the result to the PL/SQL engine. This happens every time a SQL statement is met inside the PL/SQL block.

This is called context switching between the engines, and it reduces the overall performance of the code block. The cost grows inside loops, because every iteration executes SQL and PL/SQL statements and triggers another switch. FORALL and BULK COLLECT in Oracle are great solutions for these situations, because they do the work in a single fetch or a single bulk DML call, as the sketch below illustrates.
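As an illustration, a row-by-row pattern like the following hands each UPDATE to the SQL engine separately, so one context switch happens per iteration (a minimal sketch using a hypothetical employees table, not one of the tables used later in this article):

--
BEGIN
   FOR emp_rec IN (SELECT employee_id FROM employees) LOOP
      -- each iteration pauses the PL/SQL engine and calls the SQL engine
      UPDATE employees
         SET salary = salary * 1.01
       WHERE employee_id = emp_rec.employee_id;
   END LOOP;
END;
--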
bulk collect into record type in Oracle
BULK COLLECT is placed before the INTO keyword, and the collection that receives the values comes after INTO. Below are some behaviours of BULK COLLECT in Oracle; a short sketch of fetching into a collection of records follows the list.
- The following three types of collections can be used to receive the values:
- Associative arrays
- Nested tables
- Varrays
- The collection is always filled starting from index 1
- If the query returns no rows, the collection is simply empty (no exception is raised)
- Each selected column is fetched into its own collection, or all columns can be fetched into a single collection of records
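Here is a minimal sketch of BULK COLLECT INTO a collection of records, assuming the population_table created in the example further below:

--
DECLARE
   -- nested table whose elements have the table's row structure
   TYPE population_rec_tab IS TABLE OF population_table%ROWTYPE;
   l_rows population_rec_tab;
BEGIN
   -- a single fetch fills the collection, starting at index 1;
   -- if no rows match, the collection is simply empty
   SELECT *
     BULK COLLECT INTO l_rows
     FROM population_table
    WHERE citizen_age > 60;

   DBMS_OUTPUT.PUT_LINE(l_rows.COUNT || ' rows fetched');
END;
--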
bulk collect in Oracle with example
Let’s look at a simple example with BULK COLLECT. I use a population table to insert data and then fetch data from it. The task is to get all the people over 18 and below 60 years of age and increase the tax of each person by 5%.
Below are the sample table and the insert script, which inserts 1,000,000 records.
--
CREATE TABLE population_table (
   citizen_id   VARCHAR2(50),
   citizen_nic  VARCHAR2(50),
   citizen_tax  NUMBER,
   citizen_city VARCHAR2(50),
   citizen_name VARCHAR2(100),
   citizen_age  NUMBER,
   PRIMARY KEY (citizen_id)
);

DECLARE
   -- insert 1,000,000 records into the table
   i     INTEGER := 0;
   TYPE namesarray IS VARRAY(10) OF VARCHAR2(50);
   names namesarray;
BEGIN
   names := namesarray('City_1', 'City_2', 'City_3', 'City_4', 'City_5',
                       'City_6', 'City_7', 'City_8', 'City_9', 'City_10');
   LOOP
      i := i + 1;
      INSERT INTO population_table
         (citizen_id, citizen_nic, citizen_tax, citizen_city, citizen_name, citizen_age)
      VALUES
         ('CT-' || i,
          1000000 + i,
          ROUND(DBMS_RANDOM.VALUE(100, 100000), 0),
          names(ROUND(DBMS_RANDOM.VALUE(1, 10), 0)),
          'CT-name' || i,
          ROUND(DBMS_RANDOM.VALUE(1, 100), 0));
      EXIT WHEN i > 999999;
   END LOOP;
END;
--
First, let's update the income tax by 5% in the regular way. We loop over the selected dataset and update each record one by one, even though there are a million records in the table.
--
DECLARE
BEGIN
   DBMS_OUTPUT.PUT_LINE('Time Started: ' || TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));

   -- row-by-row update: every UPDATE is handed to the SQL engine separately
   FOR population_rec IN (SELECT citizen_tax, citizen_id
                            FROM population_table
                           WHERE citizen_age > 18
                             AND citizen_age < 60)
   LOOP
      UPDATE population_table pop
         SET pop.citizen_tax = pop.citizen_tax + pop.citizen_tax * 5 / 100
       WHERE pop.citizen_id = population_rec.citizen_id;
   END LOOP;

   DBMS_OUTPUT.PUT_LINE('Time ended: ' || TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
END;
--
Think about how many context switches happen in this code. If the query returns 500,000 records, there are over 500,000 context switches. Below is the time the update took.
--
Time Started: 08-FEB-2025 06:43:30
Time ended: 08-FEB-2025 06:43:37
--
We can avoid this situation by using BULK COLLECT in Oracle. It does not eliminate context switches entirely, but it reduces them to a handful, which improves the performance of the code block. Let's see how to write BULK COLLECT with FORALL.
Bulk Collect processing in PL/SQL
The example below shows how to use BULK COLLECT with FORALL. BULK COLLECT fetches multiple records into a single collection in one call.
--
DECLARE
   TYPE population_ids_tab IS TABLE OF population_table.citizen_id%TYPE
      INDEX BY PLS_INTEGER;
   l_population_ids population_ids_tab;
BEGIN
   DBMS_OUTPUT.PUT_LINE('Time Started: ' || TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));

   -- single fetch of all matching ids into the collection
   SELECT citizen_id
     BULK COLLECT INTO l_population_ids
     FROM population_table
    WHERE citizen_age > 18
      AND citizen_age < 60;

   -- single bulk DML call driven by the collection
   FORALL indx IN 1 .. l_population_ids.COUNT
      UPDATE population_table pop
         SET pop.citizen_tax = pop.citizen_tax + pop.citizen_tax * 5 / 100
       WHERE pop.citizen_id = l_population_ids(indx);

   DBMS_OUTPUT.PUT_LINE('Time ended: ' || TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
END;
--
This is a simple example of using BULK COLLECT with FORALL. There is a single call to fetch the information from the database, and then another single call to update all the records. That means the large number of context switches between the SQL and PL/SQL engines is reduced to just a couple.
- The declaration section first creates a collection type for the citizen_id values of the population table
- The collection is declared as a TABLE OF population_table.citizen_id%TYPE
- The execution section bulk-collects all the filtered values INTO the collection
- FORALL then performs the DML (here an UPDATE) as a single bulk operation
Looking at the time difference, the run time dropped to about 3 seconds from around 7 seconds earlier. In this case that is an improvement of more than half, which is huge when you have millions of records in a table.
--
Time Started: 08-FEB-2025 06:55:52
Time ended: 08-FEB-2025 06:55:55
--
bulk collect vs cursor in oracle
This example shows how a cursor and the FETCH keyword are used to fetch data into the collection variable. FORALL is then used to update the data, as in the example above. The cursor is declared in the DECLARE section and opened and fetched inside the BEGIN block.
--
DECLARE
   TYPE population_ids_tab IS TABLE OF population_table.citizen_id%TYPE
      INDEX BY PLS_INTEGER;
   l_population_ids population_ids_tab;

   CURSOR population_tab IS
      SELECT citizen_id
        FROM population_table
       WHERE citizen_age > 18
         AND citizen_age < 60;
BEGIN
   DBMS_OUTPUT.PUT_LINE('Time Started: ' || TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));

   -- single bulk fetch from the cursor
   OPEN population_tab;
   FETCH population_tab BULK COLLECT INTO l_population_ids;
   CLOSE population_tab;

   FORALL indx IN 1 .. l_population_ids.COUNT
      UPDATE population_table pop
         SET pop.citizen_tax = pop.citizen_tax + pop.citizen_tax * 5 / 100
       WHERE pop.citizen_id = l_population_ids(indx);

   DBMS_OUTPUT.PUT_LINE('Time ended: ' || TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
END;

Time Started: 08-FEB-2025 10:11:32
Time ended: 08-FEB-2025 10:11:33
--
Bulk Collect LIMIT in Oracle
If there are millions of records and complex processing, the fetch can raise an error related to session memory. In such cases there is a way to avoid the memory problem: the LIMIT keyword is used to fetch the data in chunks.
The example below shows how to use the LIMIT keyword, with a loop iterating over each chunk of data.
--
DECLARE
   TYPE population_ids_tab IS TABLE OF population_table.citizen_id%TYPE
      INDEX BY PLS_INTEGER;
   l_population_ids population_ids_tab;

   CURSOR population_tab IS
      SELECT citizen_id
        FROM population_table
       WHERE citizen_age > 18
         AND citizen_age < 60;
BEGIN
   DBMS_OUTPUT.PUT_LINE('Time Started: ' || TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));

   OPEN population_tab;
   LOOP
      -- fetch the next chunk of at most 60,000 ids
      FETCH population_tab BULK COLLECT INTO l_population_ids LIMIT 60000;
      EXIT WHEN l_population_ids.COUNT = 0;

      DBMS_OUTPUT.PUT_LINE(l_population_ids.COUNT || ' rows');

      FORALL indx IN 1 .. l_population_ids.COUNT
         UPDATE population_table pop
            SET pop.citizen_tax = pop.citizen_tax + pop.citizen_tax * 5 / 100
          WHERE pop.citizen_id = l_population_ids(indx);
   END LOOP;
   CLOSE population_tab;

   DBMS_OUTPUT.PUT_LINE('Time ended: ' || TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));
END;

Time Started: 08-FEB-2025 12:10:32
60000 rows
60000 rows
60000 rows
60000 rows
60000 rows
60000 rows
54728 rows
Time ended: 08-FEB-2025 12:10:35
--
You can see the run time is still well below the regular FOR loop approach. The chunks are 60,000 rows each, and the loop runs until the dataset is exhausted. There are still SQL / PL/SQL context switches, but only a handful rather than a million, which fixes the session memory problem while keeping the run time low.
bulk collect exceptions in Oracle
SQL%BULK_EXCEPTIONS is used to track the errors raised while executing bulk DML. If you want the statement to keep going instead of stopping at the first failure, add SAVE EXCEPTIONS to the FORALL; it enables an UPDATE, INSERT, or DELETE statement to continue past the rows that fail.
Each failure is recorded in SQL%BULK_EXCEPTIONS, so you can later loop over it and track the issues in the code block. Below is a simple example using %BULK_EXCEPTIONS.
--
DECLARE
   TYPE population_ids_tab IS TABLE OF population_table.citizen_id%TYPE
      INDEX BY PLS_INTEGER;
   l_population_ids population_ids_tab;

   CURSOR population_tab IS
      SELECT citizen_id
        FROM population_table
       WHERE citizen_age > 18
         AND citizen_age < 60;

   errors     NUMBER;
   dml_errors EXCEPTION;  -- declare our exception
   -- for this demo the exception is mapped to -20001 and raised manually below;
   -- in real use you would map it to -24381, which FORALL ... SAVE EXCEPTIONS raises
   PRAGMA EXCEPTION_INIT(dml_errors, -20001);
BEGIN
   DBMS_OUTPUT.PUT_LINE('Time Started: ' || TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));

   OPEN population_tab;
   LOOP
      FETCH population_tab BULK COLLECT INTO l_population_ids LIMIT 60000;
      EXIT WHEN l_population_ids.COUNT = 0;

      DBMS_OUTPUT.PUT_LINE(l_population_ids.COUNT || ' rows');

      -- SAVE EXCEPTIONS lets the bulk UPDATE continue past failing rows
      FORALL indx IN 1 .. l_population_ids.COUNT SAVE EXCEPTIONS
         UPDATE population_table pop
            SET pop.citizen_tax = pop.citizen_tax + pop.citizen_tax * 5 / 100
          WHERE pop.citizen_id = l_population_ids(indx);
   END LOOP;
   CLOSE population_tab;

   DBMS_OUTPUT.PUT_LINE('Time ended: ' || TO_CHAR(SYSDATE, 'DD-MON-YYYY HH24:MI:SS'));

   -- manually raise an error so the handler below is exercised
   RAISE_APPLICATION_ERROR(-20001, 'loop exception.');
EXCEPTION
   -- figure out what failed and why
   WHEN dml_errors THEN
      errors := SQL%BULK_EXCEPTIONS.COUNT;
      DBMS_OUTPUT.PUT_LINE('Number of errors that failed: ' || errors);
      FOR i IN 1 .. errors LOOP
         DBMS_OUTPUT.PUT_LINE('Error #' || i || ' occurred during ' ||
                              'iteration #' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
         DBMS_OUTPUT.PUT_LINE('Error message is ' ||
                              SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
      END LOOP;
END;
--
Here I added code to manually raise an error. RAISE_APPLICATION_ERROR lets you raise custom messages with your own error codes; the error code must be in the range -20000 to -20999.
RECAP
These are a few examples of how to improve performance using BULK COLLECT in Oracle. The focus here has mostly been on BULK COLLECT itself, but the real performance gain on DML operations comes from combining it with FORALL.
BULK COLLECT improves on regular row-by-row looping with DML operations because there are far fewer context switches between the SQL and PL/SQL engines.
Ref: Oracle Doc