
Newly2b

09/16/06 12:29 PM

#73610 RE: IH Geek [Dave] #73606

Hi, grub. Thought I'd mention that yesterday, twice, when I hit 'submit post' the msg never posted and I was flipped back to my last post. I've noticed that if I hit 'preview' first before 'submit' I don't have this problem and the msg goes through without issue. Have fun. . .

EDIT: I did not hit 'preview' first with this post, and when I hit 'submit' it went through without a problem. Does that mean the issue is fixed?

Newly

Bob Zumbrunnen

09/16/06 6:57 PM

#73652 RE: IH Geek [Dave] #73606

I'm going to have to put some thought into doing this as two separate procs. One that only does the insert and another to do the other stuff. The whole transaction, as-is, is pretty big and probably takes too long. The insert is the most important part and probably should be by itself so it's less likely to be part of a failed transaction.

Then another to do all the necessary updates.

At face value, sounds like it'd reduce collisions, though I'm sure it's not the complete solution. Still need try/catch and need to determine and set the isolation level that's best for our scenario.
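
Just to sketch what that might look like: a lean, insert-only proc with try/catch and an explicit isolation level could be roughly as follows. This is only an illustration; the proc name, table, and columns (usp_InsertPostOnly, ih_posts, etc.) are made up, and READ COMMITTED is just a placeholder for whatever level turns out to be best.

CREATE PROCEDURE dbo.usp_InsertPostOnly
    @board_id int,
    @user_id  int,
    @body     nvarchar(max)
AS
BEGIN
    SET NOCOUNT ON
    -- Explicit isolation level, per the note above (placeholder choice)
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED

    BEGIN TRY
        BEGIN TRANSACTION

        -- Insert only; all the secondary updates live in a second proc
        INSERT INTO dbo.ih_posts (board_id, user_id, body, posted_at)
        VALUES (@board_id, @user_id, @body, GETDATE())

        COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION
        -- Re-raise so the caller can log and/or retry
        RAISERROR('usp_InsertPostOnly failed', 16, 1)
    END CATCH
END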

Dimension

09/18/06 2:16 PM

#73714 RE: IH Geek [Dave] #73606

Grub/Bob:

Some comments based on my experience...

If the isolation level is set to read committed, transaction #2 would not see the msg_number used by the uncommitted trx #1 and would use the same msg_number. A duplicate sequence number occurs.
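
For illustration, the race comes from deriving the next number from committed data only, presumably something like MAX + 1 (the table and column names here are just placeholders):

DECLARE @next_msg int

BEGIN TRANSACTION
-- Both transactions can run this before either commits; under read
-- committed each sees only committed rows, so both get the same MAX
SELECT @next_msg = MAX(msg_number) + 1 FROM dbo.ih_posts

-- ...and both insert the same msg_number -> duplicate sequence
INSERT INTO dbo.ih_posts (msg_number, body)
VALUES (@next_msg, N'example body')
COMMIT TRANSACTION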

-You can avoid this by doing a 'dummy update' at the very beginning of transaction #1 on the data that determines the next msg_number. This will cause transaction #2 to wait until that 'dummy update' is committed/rolled back. If your stored proc is lean and quick, it should not have an impact. (There's a sketch of this at the end of this post.)

-Ideally, sequence numbers (msg ids, board ids, etc.) should be stored in a separate table; that way you can perform the technique described above on this table only, away from your main data tables. One of the fields on this table could be an 'update_date' field, for example, that you update at the very beginning of transaction #1.

-You would obviously want to keep your transactions small if you are employing locking.

-Even though this forces a lock on the row that controls the msg_number, I think it may actually solve a lot of the problems you're all seeing...and done correctly, I don't think you will see any noticeable difference in performance. I personally WOULD AVOID "dirty reads" (isolation level = read uncommitted)...that, in my opinion, is risky, and you really have to have robust error handling to avoid some potential issues. On larger systems it can also lead to some kludgy code.
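
A rough sketch of the 'dummy update' approach on a separate sequence table. All names here (ih_sequences, ih_posts, next_msg_number, etc.) are hypothetical, and the declared variables stand in for the posting proc's parameters:

-- Hypothetical sequence table: one row per sequence, e.g.
--   CREATE TABLE dbo.ih_sequences
--       (seq_name varchar(30) PRIMARY KEY, next_msg_number int, update_date datetime)

DECLARE @msg_number int
DECLARE @board_id   int
DECLARE @body       nvarchar(max)
SELECT  @board_id = 123, @body = N'example body'   -- stand-ins for the proc's parameters

BEGIN TRANSACTION

-- The 'dummy update' comes first: it takes an exclusive lock on the sequence
-- row, so transaction #2 blocks right here until this transaction commits or
-- rolls back
UPDATE dbo.ih_sequences
SET update_date = GETDATE(),
    next_msg_number = next_msg_number + 1
WHERE seq_name = 'msg_number'

-- Safe to read now; no other transaction can get past the UPDATE above
SELECT @msg_number = next_msg_number
FROM dbo.ih_sequences
WHERE seq_name = 'msg_number'

INSERT INTO dbo.ih_posts (msg_number, board_id, body)
VALUES (@msg_number, @board_id, @body)

COMMIT TRANSACTION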