Bulk Collect In Oracle
Definition of Bulk Collect: Bulk binds can improve performance when loading collections from queries. The BULK COLLECT INTO construct binds the output of a query directly to a collection.
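As a quick illustration of the construct itself, the following minimal sketch fetches a few object names from ALL_OBJECTS into a nested table in a single fetch. The type name t_name_tab and the variable l_names are illustrative choices only and are not used in the later examples.
SET SERVEROUTPUT ON
DECLARE
  -- Collection type and variable used only for this illustration.
  TYPE t_name_tab IS TABLE OF all_objects.object_name%TYPE;
  l_names t_name_tab;
BEGIN
  -- Bind the query output straight into the collection.
  SELECT object_name
  BULK COLLECT INTO l_names
  FROM   all_objects
  WHERE  ROWNUM <= 10;

  DBMS_OUTPUT.put_line(l_names.count || ' rows fetched');
END;
/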
The example below shows how much this can improve performance. To test it, first create a table holding a reasonable amount of data:
CREATE TABLE bulk_collect_test AS
SELECT owner,
       object_name,
       object_id
FROM   all_objects;
The following code compares the time taken to populate a collection one row at a time with a regular cursor loop against the time taken by a single bulk fetch.
SET SERVEROUTPUT ON
DECLARE
  TYPE t_bulk_collect_test_tab IS TABLE OF bulk_collect_test%ROWTYPE;

  l_tab   t_bulk_collect_test_tab := t_bulk_collect_test_tab();
  l_start NUMBER;
BEGIN
  -- Time a regular population.
  l_start := DBMS_UTILITY.get_time;

  FOR cur_rec IN (SELECT *
                  FROM   bulk_collect_test)
  LOOP
    l_tab.extend;
    l_tab(l_tab.last) := cur_rec;
  END LOOP;

  DBMS_OUTPUT.put_line('Regular (' || l_tab.count || ' rows): ' ||
                       (DBMS_UTILITY.get_time - l_start));

  -- Time bulk population.
  l_start := DBMS_UTILITY.get_time;

  SELECT *
  BULK COLLECT INTO l_tab
  FROM   bulk_collect_test;

  DBMS_OUTPUT.put_line('Bulk (' || l_tab.count || ' rows): ' ||
                       (DBMS_UTILITY.get_time - l_start));
END;
/
Regular (42578 rows): 66
Bulk (42578 rows): 4
PL/SQL procedure successfully completed.
SQL>
From the output we can see how long the regular row-by-row approach took compared with the bulk fetch; the bulk approach is considerably faster. Remember, though, that collections are held in memory, so doing a bulk collect from a large query could cause a considerable performance problem.
To avoid this we can use the LIMIT option with BULK COLLECT. This gives you the benefits of bulk binds without hogging all the server memory. The following code shows how to chunk through the data in a large table.
SET SERVEROUTPUT ON
DECLARE
  TYPE t_bulk_collect_test_tab IS TABLE OF bulk_collect_test%ROWTYPE;

  l_tab t_bulk_collect_test_tab;

  CURSOR c_data IS
    SELECT *
    FROM   bulk_collect_test;
BEGIN
  OPEN c_data;
  LOOP
    FETCH c_data
    BULK COLLECT INTO l_tab LIMIT 10000;

    EXIT WHEN l_tab.count = 0;

    -- Process contents of collection here.
    DBMS_OUTPUT.put_line(l_tab.count || ' rows');
  END LOOP;
  CLOSE c_data;
END;
/
10000 rows
10000 rows
10000 rows
10000 rows
2578 rows
PL/SQL procedure successfully completed.
SQL>
So we can see that with LIMIT 10000 we were able to break the data into chunks of 10,000 rows, reducing the memory footprint of our application.
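As a sketch of what might replace the "-- Process contents of collection here." placeholder, the loop below simply walks each fetched chunk with a numeric FOR loop; the DBMS_OUTPUT call stands in for whatever per-row work you actually need and is an assumption for illustration only.
SET SERVEROUTPUT ON
DECLARE
  TYPE t_bulk_collect_test_tab IS TABLE OF bulk_collect_test%ROWTYPE;

  l_tab t_bulk_collect_test_tab;

  CURSOR c_data IS
    SELECT *
    FROM   bulk_collect_test;
BEGIN
  OPEN c_data;
  LOOP
    FETCH c_data
    BULK COLLECT INTO l_tab LIMIT 10000;

    EXIT WHEN l_tab.count = 0;

    -- Work through the rows in the current chunk.
    FOR i IN 1 .. l_tab.count LOOP
      DBMS_OUTPUT.put_line(l_tab(i).owner || '.' || l_tab(i).object_name);
    END LOOP;
  END LOOP;
  CLOSE c_data;
END;
/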
From Oracle 10g onwards, the PL/SQL compiler converts cursor FOR LOOPs into BULK COLLECTs with an array size of 100.
The example below gives a clearer idea of how a cursor FOR LOOP compares with explicit BULK COLLECTs at different LIMIT sizes.
SET SERVEROUTPUT ON
DECLARE
  TYPE t_bulk_collect_test_tab IS TABLE OF bulk_collect_test%ROWTYPE;

  l_tab t_bulk_collect_test_tab;

  CURSOR c_data IS
    SELECT *
    FROM   bulk_collect_test;

  l_start NUMBER;
BEGIN
  -- Time a regular cursor for loop.
  l_start := DBMS_UTILITY.get_time;

  FOR cur_rec IN (SELECT *
                  FROM   bulk_collect_test)
  LOOP
    NULL;
  END LOOP;

  DBMS_OUTPUT.put_line('Regular : ' ||
                       (DBMS_UTILITY.get_time - l_start));

  -- Time bulk with LIMIT 10.
  l_start := DBMS_UTILITY.get_time;

  OPEN c_data;
  LOOP
    FETCH c_data
    BULK COLLECT INTO l_tab LIMIT 10;
    EXIT WHEN l_tab.count = 0;
  END LOOP;
  CLOSE c_data;

  DBMS_OUTPUT.put_line('LIMIT 10 : ' ||
                       (DBMS_UTILITY.get_time - l_start));

  -- Time bulk with LIMIT 100.
  l_start := DBMS_UTILITY.get_time;

  OPEN c_data;
  LOOP
    FETCH c_data
    BULK COLLECT INTO l_tab LIMIT 100;
    EXIT WHEN l_tab.count = 0;
  END LOOP;
  CLOSE c_data;

  DBMS_OUTPUT.put_line('LIMIT 100: ' ||
                       (DBMS_UTILITY.get_time - l_start));

  -- Time bulk with LIMIT 1000.
  l_start := DBMS_UTILITY.get_time;

  OPEN c_data;
  LOOP
    FETCH c_data
    BULK COLLECT INTO l_tab LIMIT 1000;
    EXIT WHEN l_tab.count = 0;
  END LOOP;
  CLOSE c_data;

  DBMS_OUTPUT.put_line('LIMIT 1000: ' ||
                       (DBMS_UTILITY.get_time - l_start));
END;
/
Regular : 18
LIMIT 10 : 80
LIMIT 100: 15
LIMIT 1000: 10
PL/SQL procedure successfully completed.
SQL>
You can see from this example that the performance of a regular cursor FOR LOOP is comparable to a BULK COLLECT using an array size of 100.