In one of my conversion projects I had to migrate journals from the legacy system to Oracle Apps. The volume of data was on the order of 10 million records, so performance was a major concern. My solution was to run multiple instances of the Journal Import program for different periods simultaneously to cut the migration time. The problem with running multiple instances of Journal Import against the GL_INTERFACE table was that the instances could lock the table, forcing the processes to be killed. By that point some of the data might already have been converted, which raised the risk of duplicating journals in a second run; the fallback was to run the programs serially and consume far more time.
I found that Oracle gives the option of creating my own journal import table to use in place of GL_INTERFACE, through the GL_JOURNAL_IMPORT_PKG API. This let me run Journal Import against multiple tables in parallel without the fear of table locks. I have given the code below.
DECLARE
   bulk_errors        EXCEPTION;
   PRAGMA EXCEPTION_INIT (bulk_errors, -24381);   -- ORA-24381: error(s) in array DML

   TYPE t_int_record_type IS TABLE OF gl_interface%ROWTYPE
      INDEX BY BINARY_INTEGER;

   p_int_records      t_int_record_type;
   error_count        NUMBER := 0;
   v_sob_id           NUMBER;
   v_run_id           NUMBER;
   v_group_id         NUMBER;
   v_request_id       NUMBER;
   v_import_status    NUMBER;
   v_request_status   BOOLEAN;
   v_phase            VARCHAR2 (100);
   v_status           VARCHAR2 (100);
   v_dev_phase        VARCHAR2 (100);
   v_dev_status       VARCHAR2 (100);
   v_message          VARCHAR2 (100);
BEGIN
   -- Create the custom gl_interface table
   gl_journal_import_pkg.create_table (
      table_name              => 'GL_CUSTOM_INTERFACE',
      tablespace              => NULL,
      physical_attributes     => NULL,
      create_n1_index         => TRUE,
      n1_tablespace           => NULL,
      n1_physical_attributes  => NULL,
      create_n2_index         => TRUE,
      n2_tablespace           => NULL,
      n2_physical_attributes  => NULL);

   -- Process all the records to be inserted into General Ledger
   -- and populate them into the collection p_int_records.

   -- Now insert the collection into the custom gl_interface table
   FORALL i IN 1 .. p_int_records.COUNT SAVE EXCEPTIONS
      INSERT INTO gl_custom_interface VALUES p_int_records (i);

   fnd_file.put_line (fnd_file.LOG,
      'Number of rows inserted into GL_CUSTOM_INTERFACE = ' || TO_CHAR (SQL%ROWCOUNT));

   -- Populate the GL_INTERFACE_CONTROL table
   gl_journal_import_pkg.populate_interface_control (
      user_je_source_name   => 'NEW_SOURCE',            -- as per the source of the journal lines
      group_id              => v_group_id,              -- as per the group id of the journal lines
      set_of_books_id       => v_sob_id,                -- as per the SOB id of the journal lines
      interface_run_id      => v_run_id,                -- unique interface run id for this batch
      table_name            => 'GL_CUSTOM_INTERFACE',   -- we can pass a parameter for the table name
      processed_data_action => NULL);
   -- processed_data_action permissible values:
   --   gl_journal_import_pkg.save_data   - keeps the data in the table after Journal Import finishes successfully
   --   gl_journal_import_pkg.delete_data - deletes the data after Journal Import finishes successfully
   --   gl_journal_import_pkg.drop_table  - drops the custom interface table after Journal Import finishes successfully
   --   NULL                              - same as gl_journal_import_pkg.delete_data
   COMMIT;

   -- Execute Journal Import.
   -- We pass the run id to ensure that the lines are picked up from the
   -- GL_CUSTOM_INTERFACE table: Journal Import looks up GL_INTERFACE_CONTROL
   -- with the run id to identify the journal lines.
   v_request_id := fnd_request.submit_request (
      application => 'SQLGL',    -- application short name
      program     => 'GLLEZL',   -- program short name
      description => NULL,       -- program name
      start_time  => NULL,       -- start date
      sub_request => FALSE,      -- sub-request
      argument1   => v_run_id,   -- interface run id
      argument2   => v_sob_id,   -- set of books id
      argument3   => 'Y',        -- error to suspense flag
      argument4   => NULL,       -- from accounting date
      argument5   => NULL,       -- to accounting date
      argument6   => 'N',        -- create summary flag
      argument7   => 'O');       -- import descriptive flexfield flag
   COMMIT;

   -- If Journal Import was launched, a valid request id is returned;
   -- otherwise the request id returned is 0.
   IF v_request_id > 0
   THEN
      -- Wait for Journal Import to complete
      v_request_status := fnd_concurrent.wait_for_request (
         request_id => v_request_id,
         interval   => 30,
         max_wait   => 0,
         phase      => v_phase,
         status     => v_status,
         dev_phase  => v_dev_phase,
         dev_status => v_dev_status,
         message    => v_message);

      -- Check the status of the Journal Import request after completion.
      -- If the program completed successfully, the status code in
      -- fnd_concurrent_requests is 'C' (mapped to 0 below).
      SELECT DECODE (status_code, 'C', 0, 1)
        INTO v_import_status
        FROM fnd_concurrent_requests
       WHERE request_id = v_request_id;

      IF v_import_status = 0
      THEN
         -- Drop the custom interface table after Journal Import has been successful
         gl_journal_import_pkg.drop_table (table_name => 'GL_CUSTOM_INTERFACE');
      ELSE
         fnd_file.put_line (fnd_file.LOG, 'Journal Import completed in error or warning');
      END IF;
   ELSE
      DBMS_OUTPUT.put_line ('GL Import did not run');
   END IF;
EXCEPTION
   WHEN bulk_errors
   THEN
      -- Report every row rejected by the bulk insert
      error_count := SQL%BULK_EXCEPTIONS.COUNT;
      fnd_file.put_line (fnd_file.LOG, 'Rows rejected by bulk insert = ' || error_count);
      FOR i IN 1 .. error_count
      LOOP
         fnd_file.put_line (fnd_file.LOG,
               'Error at collection index '
            || SQL%BULK_EXCEPTIONS (i).error_index
            || ': '
            || SQLERRM (-SQL%BULK_EXCEPTIONS (i).error_code));
      END LOOP;
END;
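A note on the identifiers left unassigned above: v_run_id, v_sob_id and v_group_id must be populated before GL_INTERFACE_CONTROL is loaded. Below is a minimal sketch of one common way to initialize them; gl_interface_control_pkg.get_unique_run_id and the set-of-books lookup by name are my assumptions about a typical EBS environment rather than part of the original code, so verify them against your release.

-- Hedged sketch: one common way to initialize the identifiers used above.
DECLARE
   v_run_id   NUMBER;
   v_sob_id   NUMBER;
   v_group_id NUMBER;
BEGIN
   -- Unique interface run id expected by GL_INTERFACE_CONTROL
   -- (assumed API; confirm it exists in your EBS release)
   v_run_id := gl_interface_control_pkg.get_unique_run_id;

   -- Set of books id for the target ledger (the SOB name is hypothetical)
   SELECT set_of_books_id
     INTO v_sob_id
     FROM gl_sets_of_books
    WHERE name = 'MY_SET_OF_BOOKS';

   -- Any number that uniquely tags this batch of journal lines
   v_group_id := TO_NUMBER (TO_CHAR (SYSDATE, 'YYYYMMDDHH24MISS'));
END;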
In this example I have illustrated one custom interface table. For my conversion project I used eight custom interface tables to run Journal Import in parallel; a sketch of that pattern follows. Together with PL/SQL collections for data processing and cleansing, this brought the migration down from the initial 10 hours (single table, no collections) to under 2 hours.
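To show how the multi-table setup fits together, here is a minimal, hypothetical sketch of the parallel submission loop. The table names GL_CUSTOM_INTERFACE_1 through _8, the collections and the hard-coded set of books id are illustrative assumptions; each table would be created and bulk-loaded exactly as in the main example, and each request would then be monitored with fnd_concurrent.wait_for_request as shown above.

-- Hedged sketch: submit one Journal Import per custom interface table.
DECLARE
   TYPE t_num_tab IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
   v_run_ids     t_num_tab;
   v_request_ids t_num_tab;
   v_sob_id      NUMBER := 1;   -- hypothetical set of books id
BEGIN
   FOR i IN 1 .. 8
   LOOP
      -- Assumes GL_CUSTOM_INTERFACE_1 .. GL_CUSTOM_INTERFACE_8 have already
      -- been created and bulk-loaded as in the main example.
      v_run_ids (i) := gl_interface_control_pkg.get_unique_run_id;   -- assumed API

      gl_journal_import_pkg.populate_interface_control (
         user_je_source_name   => 'NEW_SOURCE',
         group_id              => i,
         set_of_books_id       => v_sob_id,
         interface_run_id      => v_run_ids (i),
         table_name            => 'GL_CUSTOM_INTERFACE_' || i,
         processed_data_action => gl_journal_import_pkg.drop_table);
         -- drop each table automatically on success (per the values listed above)
      COMMIT;

      -- One Journal Import per table; the requests run concurrently
      v_request_ids (i) := fnd_request.submit_request (
         application => 'SQLGL',
         program     => 'GLLEZL',
         description => NULL,
         start_time  => NULL,
         sub_request => FALSE,
         argument1   => v_run_ids (i),
         argument2   => v_sob_id,
         argument3   => 'Y',    -- error to suspense
         argument4   => NULL,   -- from accounting date
         argument5   => NULL,   -- to accounting date
         argument6   => 'N',    -- create summary flag
         argument7   => 'O');   -- import descriptive flexfields
      COMMIT;
   END LOOP;
   -- Wait on each request with fnd_concurrent.wait_for_request as shown earlier.
END;

Because each instance reads from its own table via its own interface run id, the submissions never contend for locks on a shared GL_INTERFACE table, which is what made the parallel runs safe in my project.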