SQL: updating a table from itself
I have another table B containing 10,000 records of incremented and edited records of table A. I am using the following code to append data from B to A:

-- for incremental/new data
insert into A
select * from B
 where column_name not in ( select column_name from A );

-- for edited data
cursor C_AB is
  select * from B
  minus
  select * from A;

for R in C_AB loop
  update A set .... where ..;
end loop;

It's working, but it takes a huge amount of time and sometimes hangs the computer. The number of rows in both tables is the same after porting. There is also a possibility of one row being inserted twice and another row not being inserted at all. The records must be processed in order so that, for instance, if a record is updated, deleted, inserted, then updated again (not likely, but it *could* happen), those operations happen in the correct order. Would you please help me make my procedure faster?

Followup: to update 10,000 rows in a 100,000 row table should take seconds (it'll be a direct function of the number of indexes).

I thought that NOLOGGING skips redo/undo (to simplify), and that only bulk operations can use it.

insert /*+ append */ can skip logging of the TABLE data, since append writes above the high water mark (it does not touch ANY existing data). insert /*+ append */ cannot skip logging of the INDEX data on that table, regardless of the NOLOGGING attribute of an index -- since there you are mucking about with EXISTING DATA (and a failure in the middle would destroy your data!).

May 13, 2004 - pm UTC

no, I mean:

create global temporary table gtt
( b_elig_key ...,
  client_member_id ...,
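The append/NOLOGGING behavior described above can be sketched as follows (a minimal sketch only; the table names and the `id` key column are placeholders, and note the `+` in `/*+ append */` -- without it the text is an ordinary comment, not a hint):

```sql
-- make direct-path inserts into A eligible to skip redo for the TABLE data
alter table A nologging;

-- direct-path (append) insert: writes above the high water mark, so
-- existing table data is untouched; index maintenance is still logged
insert /*+ append */ into A
select * from B
 where B.id not in ( select id from A );   -- "id" is a placeholder key

-- a direct-path insert must be committed before the same session
-- can read the table again
commit;
```

If such a load fails mid-append, the temporary extents being written are simply cleaned up, which is why a failure in the middle is called harmless below.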
  ... DATE_SERVICE, 'YYYYMMDD') )

May 15, 2004 - pm UTC

how many rows are to be updated?
is the eligibility key indexed (are you mass updating an indexed key)?
is your update bumping into other row-level updates?

updating millions of rows is a couple-minute process if:

o the column is not indexed
o you are not contending for the data

updating millions of rows is potentially a couple (hour|day|week|month|year) process otherwise.

(one thing I forgot to mention, I think -- use dbms_stats.set_table_stats to set the number of rows in the gtt, using sql%rowcount after the insert, so the optimizer has a clue)

Tom, in the updated table (STG_CLAIM_TRY) all the records will be updated (to a value or null), around 5,000,000. Note - I didn't design this system, but I have to work with it. OK, this is the real query:

UPDATE STG_CLAIM_TRY A
   SET A.ELIGIBILITY_KEY = ( SELECT B.ELIGIBILITY_KEY
                               FROM STG_F_ELIGIBILITY_TRY B
                              WHERE A. ... )

A failure in the middle of an append into a table -- harmless, the temporary extents we were writing to just get cleaned up. NOLOGGING on an index only affects things like:

o create (no existing data)
o rebuild (no existing data is touched)

Tom, thanks for the clarification of NOLOGGING.

SELECT B.ELIGIBILITY_KEY B_ELIG_KEY
  FROM STG_F_ELIGIBILITY_TRY B, TMP_STG_CLAIM_TRY A
 WHERE A. ...

create that gtt ( ... PRIMARY KEY ... ) on commit delete rows; once, in your database. then, to update: insert into that gtt the join of A and B as above (add client_id to the select list) and then update the join of the gtt to the A table.

... DATE_SERVICE
  FROM STG_CLAIM_TRY A, STG_F_ELIGIBILITY_TRY B
 WHERE A. ...

There are no indexes or constraints on STG_CLAIM_TRY. Would you please help me make my procedure faster? You have given information for 9i -- is it the same for Oracle 8i and Developer 6i? Please help me. Thank you very much for your kind help; the update information is really nice and working, but the insert has the same problem.

What about:

create global temporary table gtt
( id  int primary key,
  cnt int )
on commit delete rows
/

you'll add that ONCE, it'll become part of your schema forever....
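The set_table_stats tip above can be sketched like this (a minimal sketch; the gtt table and its load query are placeholders):

```sql
declare
  l_rows number;
begin
  -- load the global temporary table (placeholder query)
  insert into gtt ( id, cnt )
  select deptno, count(*)
    from emp
   group by deptno;

  l_rows := sql%rowcount;

  -- tell the optimizer how many rows the gtt really holds, since a
  -- temporary table is not analyzed the way a permanent table is
  dbms_stats.set_table_stats
  ( ownname => user,
    tabname => 'GTT',
    numrows => l_rows );
end;
/
```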
Followup: that means -- just using math here -- that we have 600 seconds and 12,000 queries to run; 12,000/600 = 20, so we are doing 20 per second -- or each query is taking 0.05 cpu seconds to run. 0.05 cpu seconds is awesome for a single query -- but do anything 12,000 times and you might have a problem! this might be one of the rare times that a temp table can be useful.

It isn't doing anything beyond confusing the reader of your code... do you believe the update is slow -- what led you to that particular conclusion?

SQL> select * from t;

       SNO ITEMCODE      VALUE APPLIEDVALUE
---------- -------- ---------- ------------
         1 item1           200          200
         2 item2           100          100
         3 item3           300          300
         4 item4           200          200
         5 item5            50           50
         6 item6           200          150
         7 item7           400            0

7 rows selected.

Hi Tom, I have a huge table similar to the following:

eno   ename   dno   sal   mgr
---------------------------------
101   A       1     100
102   B       1     200
103   C       1     300
104   D       2     100
105   E       2     200
---------------------------------

Here I want to update the 'mgr' column with the 'eno' value having the largest 'sal' for each dno.

What I want to do is update one column based on the values of 4 other columns, like such:

t1: recordno, begindate
t2: recordno, date1, date2, date3, date4

I've tried to do the following:

update t1
   set t1.begindate = ( select greatest( greatest(st.date1, st.date2),
                                         greatest(st.date3, st.date4) ) as greatest
                          from t2 st
                         where st.recordno = t1.recordno );

it just freezes up on me... ideas?
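One way to avoid running that correlated subquery once per row of t1 is the update-the-join form used elsewhere in this thread (a sketch, assuming t2.recordno carries a unique constraint so t1 is key-preserved; note also that greatest() is variadic, so the nested calls are unnecessary):

```sql
-- update the join: one pass, instead of one subquery per t1 row
update ( select t1.begindate, t2.date1, t2.date2, t2.date3, t2.date4
           from t1, t2
          where t1.recordno = t2.recordno )
   set begindate = greatest( date1, date2, date3, date4 );
```

Without a unique constraint on t2.recordno the database cannot prove each t1 row joins to at most one t2 row, and it will reject the update as ambiguous.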
Using a cursor is OK, but it brings intolerable speed when operating on a large table.

... REGISTRATION
 where vistemp.registration.bin = vis.registration.bin )
/

am I right?

Whenever any user inserts/updates anything, the system date is inserted with it, and I am exporting that data by using that system date. AM I ON THE RIGHT TRACK?

In my case, B (the big table) has 79,186 records and S (the small table) has 12,871. Can you please tell me which option performs better if the number of records is in the millions?

I have a table t1 which holds all order information.
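The registration fragment above looks like a correlated update; a common shape for it is the following sketch (only the `bin` join column comes from the fragment -- `reg_date` as the updated column is purely a placeholder):

```sql
update vistemp.registration t
   set t.reg_date = ( select s.reg_date
                        from vis.registration s
                       where s.bin = t.bin )
 -- restrict to matched rows; without this, every unmatched row
 -- in vistemp.registration would be set to NULL
 where exists ( select null
                  from vis.registration s
                 where s.bin = t.bin );
```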
November 07, 2002 - pm UTC

Oh, well -- then you cannot do it in a single update anyway -- as the table being updated would NOT be key-preserved (and hence the result of the update would be very ambiguous).

If there is no match/join, then STG_CLAIM_TRY.ELIGIBILITY_KEY should be null. I changed the index a little, so the explain plan is a little different than before; the index on STG_F_ELIGIBILITY_TRY has CLIENT_MEMBER_ID, DATE_EFFECTIVE, DATE_TERMINATION.

**************************************************

This one takes a fraction of a second to show first results:

SELECT B. ... DATE_SERVICE ...

May 11, 2004 - pm UTC

we need ALL rows -- count(*) doesn't do it. the update speed will necessarily be gated by the performance of those queries... based on the really bad plans, I'll guess you are using the RBO?

Tom, this is what I did:

DROP TABLE PROC_CLAIM_ELIG_JOIN_TMP;

CREATE GLOBAL TEMPORARY TABLE PROC_CLAIM_ELIG_JOIN_TMP
( ELIGIBILITY_KEY  NUMBER,
  CLIENT_MEMBER_ID NUMBER,
  DATE_SERVICE     DATE )
ON COMMIT PRESERVE ROWS;

ALTER TABLE PROC_CLAIM_ELIG_JOIN_TMP
  ADD CONSTRAINT PK_PROC_CLAIM_ELIG_JOIN_TMP
  PRIMARY KEY ( CLIENT_MEMBER_ID, DATE_SERVICE ) USING INDEX;

INSERT /* APPEND */ INTO PROC_CLAIM_ELIG_JOIN_TMP
( ELIGIBILITY_KEY, CLIENT_MEMBER_ID, DATE_SERVICE )
SELECT /* ALL_ROWS */ DISTINCT B. ...

The updating session is the only session in the db.
If the table containing the changes can have MORE than one occurrence of the "primary key" of the other table -- no chance for a single statement.

... ELIGIBILITY_KEY
  FROM STG_F_ELIGIBILITY_TRY B, STG_CLAIM_TRY A
 WHERE A. ...

Analyze, use the CBO, and look for nice big juicy HASH JOINS.

Hi Tom, thank you very much for your query.

May 12, 2004 - pm UTC

my concept now, that the join is "fast", is to use a global temporary table with a primary key -- insert the results of the select join into it and update the join (which we can do, since the gtt will have a proper primary key on it).

Tom, I tried that, but I think I'm doing something wrong. I'm with you on the fact that this update should take no more than a few minutes, but it's not :-)

... DATE_SERVICE, 'YYYYMMDD') )

May 24, 2004 - pm UTC

search this site for ora-01555. also ask yourself: so, what happens when we crash in the middle of the loop?

create table testupdate
( sno          number(4),
  itemcode     varchar2(8),
  value        number(4),
  appliedvalue number(4) );

insert into testupdate (sno,itemcode,value) values (1,'item1',200);
insert into testupdate (sno,itemcode,value) values (2,'item2',100);
insert into testupdate (sno,itemcode,value) values (3,'item3',300);
insert into testupdate (sno,itemcode,value) values (4,'item4',200);
insert into testupdate (sno,itemcode,value) values (5,'item5', 50);
insert into testupdate (sno,itemcode,value) values (6,'item6',200);
insert into testupdate (sno,itemcode,value) values (7,'item7',400);

     SNO ITEMCODE      VALUE APPLIEDVALUE
-------- -------- ---------- ------------
       1 item1           200
       2 item2           100
       3 item3           300
       4 item4           200
       5 item5            50
       6 item6           200
       7 item7           400

Now: I'm writing a stored procedure in which I have to update the appliedvalue column of the above table.
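The "insert the join into a gtt, then update the join" concept above can be sketched end to end. This is a sketch only: the join predicate putting DATE_SERVICE between DATE_EFFECTIVE and DATE_TERMINATION is an assumption based on the index columns mentioned earlier, not a statement of the real business rule:

```sql
create global temporary table gtt
( client_member_id number,
  date_service     date,
  eligibility_key  number,
  constraint gtt_pk primary key ( client_member_id, date_service )
) on commit delete rows;

-- step 1: materialize the join once; DISTINCT guards against exact
-- duplicates (two different eligibility_keys for one claim would
-- still violate the primary key, and rightly so)
insert into gtt ( client_member_id, date_service, eligibility_key )
select distinct a.client_member_id, a.date_service, b.eligibility_key
  from stg_claim_try a, stg_f_eligibility_try b
 where a.client_member_id = b.client_member_id
   and a.date_service between b.date_effective and b.date_termination;

-- step 2: update the join; the gtt primary key makes STG_CLAIM_TRY
-- key-preserved, so the join itself is updatable
update ( select a.eligibility_key a_key, g.eligibility_key g_key
           from stg_claim_try a, gtt g
          where a.client_member_id = g.client_member_id
            and a.date_service     = g.date_service )
   set a_key = g_key;
```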
If you tried:

create table t1 ( x int primary key, y int );
create table t2 ( x int, y int );

insert into t1 values ( 1, 0 );
insert into t2 values ( 1, 100 );
insert into t2 values ( 1, 200 );

then

update ( select t1.y t1_y, t2.y t2_y
           from t1, t2
          where t1.x = t2.x )
   set t1_y = t2_y;

would be "ambiguous" -- there is no way we could know whether y would end up with 100 or 200 -- hence we don't even permit it.

Hi Tom, as you said, in my scenario I cannot use UPSERT (MERGE) and I have to write PL/SQL to achieve: 1) insert/update from the temporary table to the actual table. Surely next time I will follow your instructions regarding create and insert statements, which helps you to answer quickly. Regards, dmv

Tom, I am using the cost based optimizer. I followed your suggestion and analyzed the 2 tables; this is what I get now:

******************************************************

... to finish.

... ELIGIBILITY_KEY
  FROM STG_F_ELIGIBILITY_TRY B, STG_CLAIM_TRY A
 WHERE A. ...

Is there any other information that I can provide you with to help shed some light on this pain-in-the-neck update? I changed the global temporary table to an index-organized table; the insert takes minutes and the update never finishes (it's still running now, for about 30 minutes already).

can I restart that process, or did the programmer not even begin to think about that eventuality?

June 15, 2004 - pm UTC

1) b must have a primary key, yes.
2) a merge is an UPDATE AND INSERT.

For this I have something called the actual value, which is stored in the variable vnum_actualValue.
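Point 2 above -- a merge is an UPDATE AND INSERT -- is exactly the shape of the original B-into-A problem. A sketch, with a placeholder key column `id` and data column `val`; it requires each `id` to occur at most once in B:

```sql
merge into A
using B
on ( A.id = B.id )
when matched then
  -- the edited rows: replaces the cursor loop
  update set A.val = B.val
when not matched then
  -- the incremental/new rows: replaces the NOT IN insert
  insert ( id, val )
  values ( B.id, B.val );
```

One statement does the work of both the insert and the row-by-row update loop from the original question.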
(Note - as I am loading data into the temporary table from a legacy system, I want this load to be as fast as possible, so I am not putting any constraints on the temp table and am handling most of the data errors inside Oracle.)

CREATE TABLE PROC_CLAIM_ELIG_JOIN_TMP
( ELIGIBILITY_KEY  NUMBER,
  CLIENT_MEMBER_ID NUMBER,
  DATE_SERVICE     DATE,
  CONSTRAINT PK_PROC_CLAIM_ELIG_JOIN_TMP
    PRIMARY KEY ( CLIENT_MEMBER_ID, DATE_SERVICE ) )
ORGANIZATION INDEX
INCLUDING ELIGIBILITY_KEY
OVERFLOW
NOLOGGING
PARALLEL;

ALTER TABLE PROC_CLAIM_ELIG_JOIN_TMP
  ADD CONSTRAINT PK_PROC_CLAIM_ELIG_JOIN_TMP
  PRIMARY KEY ( CLIENT_MEMBER_ID, DATE_SERVICE ) USING INDEX;

INSERT /* APPEND */ INTO PROC_CLAIM_ELIG_JOIN_TMP
( ELIGIBILITY_KEY, CLIENT_MEMBER_ID, DATE_SERVICE )
SELECT /* ALL_ROWS */ DISTINCT B. ...

update ( select a1, b1
           from a, b
          where ... = ... )
   set a1 = b1
/

And then, as you state, for doing this kind of an update. Further, as per your demo, a merge is faster than a regular update when a merge does just an update. -- where did I show merge being faster than a single update?

This value in the variable vnum_actualValue should be distributed among the appliedvalue column as follows. Even the idea of creating a temporary table holding only the primary key and column b, and then applying a cursor to it, is slow.

"c) if I have a composite key, then"

--- where a.key1 = b.key1 and a.key2 = b.key2 and ----

am I right for both insert and update, given your advice?

Another table, tt1, is a summary of orders, which has a current-year summation column and a corresponding previous-year summation column:

 Name                            Null?    Type
 ------------------------------- -------- ------------
 ORDER_NUMBER                             NUMBER(10)
 ORDER_DATE                               DATE
 CY_ORD_AMT                               NUMBER
 PY_ORD_AMT                               NUMBER

Order date is a current-year date.
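For the composite-key question, the update-the-join form keeps both key columns in one predicate. A sketch: `val` is a placeholder data column, the `a`/`b` names and `key1`/`key2` come from the question, and b needs a unique constraint on (key1, key2) so that a is key-preserved:

```sql
-- b must have a unique/primary key on (key1, key2), or the
-- database rejects this as an ambiguous (non-key-preserved) update
update ( select a.val a_val, b.val b_val
           from a, b
          where a.key1 = b.key1
            and a.key2 = b.key2 )
   set a_val = b_val;
```

The same composite condition goes unchanged into a MERGE ON clause when both insert and update are needed.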