Tuesday, March 6, 2012

About TRAN, CURSOR and @@ERROR

Please take a look at my code below:
CREATE PROCEDURE ...
....
@userId int,
....
AS
BEGIN TRAN
DECLARE @confKey varchar(20), @confValue varchar(20)
DECLARE curs_conf CURSOR LOCAL FAST_FORWARD FOR SELECT [key], value FROM ...
OPEN curs_conf
FETCH NEXT FROM curs_conf INTO @confKey, @confValue
WHILE @@FETCH_STATUS = 0 BEGIN
    DELETE FROM tab_user_conf WHERE [user_id] = @userId AND conf_key = @confKey
    IF @@ERROR <> 0 BEGIN
        ROLLBACK
        CLOSE curs_conf
        DEALLOCATE curs_conf
        RETURN
    END
    IF @confValue <> '' BEGIN
        INSERT INTO tab_user_conf ([user_id], conf_key, conf_value)
        VALUES (@userId, @confKey, @confValue)
        IF @@ERROR <> 0 BEGIN
            ROLLBACK
            CLOSE curs_conf
            DEALLOCATE curs_conf
            RETURN
        END
    END
    FETCH NEXT FROM curs_conf INTO @confKey, @confValue
END
CLOSE curs_conf
DEALLOCATE curs_conf
COMMIT
This code works, but is the way it catches errors correct?
Do I have to test @@ERROR anywhere else?
Is it necessary to close and deallocate the cursor before returning when an
error occurs?
Is it better to roll back before or after closing and deallocating the cursor?
Thanks for answering my questions.
Henri
Why use a cursor?
Just use one statement for the insert and one for the delete.
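For what it's worth, a rough sketch of the set-based version Nigel is describing. The FROM clause was trimmed from the original post, so "source_of_key_values" below is only a placeholder for whatever that SELECT reads from, and @userId is the procedure's parameter:

-- One DELETE and one INSERT instead of a row-by-row cursor.
DELETE FROM tab_user_conf
WHERE [user_id] = @userId
  AND conf_key IN (SELECT [key] FROM source_of_key_values)

INSERT INTO tab_user_conf ([user_id], conf_key, conf_value)
SELECT @userId, [key], value
FROM source_of_key_values
WHERE value <> ''

Wrapped in the same BEGIN TRAN ... COMMIT, this leaves only two statements whose @@ERROR needs checking.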
"Henri" wrote:

> Please take a look at my code below:
> CREATE PROCEDURE ...
> ....
> @.userId int,
> ....
> AS
> BEGIN TRAN
> DECLARE @.confKey varchar(20), @.confValue varchar(20)
> DECLARE curs_conf CURSOR LOCAL FAST_FORWARD FOR SELECT [key], value FROM ...
> OPEN curs_conf
> FETCH NEXT FROM curs_conf INTO @.confKey, @.confValue
> WHILE @.@.FETCH_STATUS = 0 BEGIN
> DELETE FROM tab_user_conf WHERE [user_id] = @.userId AND conf_key = @.confKey
> IF @.@.ERROR <> 0 BEGIN
> ROLLBACK
> CLOSE curs_conf
> DEALLOCATE curs_conf
> RETURN
> END
> IF @.confValue <> '' BEGIN
> INSERT INTO tab_user_conf ([user_id], conf_key, conf_value) VALUES
> (@.userId, @.confKey, @.confValue)
> IF @.@.ERROR <> 0 BEGIN
> ROLLBACK
> CLOSE curs_conf
> DEALLOCATE curs_conf
> RETURN
> END
> END
> FETCH NEXT FROM curs_conf INTO @.confKey, @.confValue
> END
> CLOSE curs_conf
> DEALLOCATE curs_conf
> COMMIT
> This code works but is the way it catches errors correct?
> Do I have to test @.@.ERROR elsewhere?
> Is it necessary to close and deallocate the cursor before returning when an
> error occurs?
> Is it better to rollback before or after closing and deallocating the
> cursor?
> Thanks for answering my questions.
> Henri
>
>
|||In a transaction you will usually want to test @@ERROR after every
statement that manipulates data. As Nigel has said, however, there is no
obvious reason to use a cursor here at all. I'm not clear just what this
code is supposed to do, so if you need more help we'll need some more
information. Refer to: http://www.aspfaq.com/etiquette.asp?id=5006
David Portas
SQL Server MVP
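One detail worth keeping in mind when following David's advice (a general T-SQL point, not something from his post): @@ERROR is reset by every statement that runs, so testing it in an IF works for that one test, but inside the IF block the value will typically already have been reset to 0. If the error number is needed again, copy it into a local variable immediately. A minimal sketch of that pattern inside a procedure body, reusing the variable names from Henri's procedure:

DECLARE @err int

DELETE FROM tab_user_conf
WHERE [user_id] = @userId AND conf_key = @confKey

SET @err = @@ERROR      -- capture right away; the next statement resets @@ERROR
IF @err <> 0 BEGIN
    ROLLBACK
    RETURN @err         -- the saved value is still available here
END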
|||Sorry, I had removed the part after FROM.
It's
DECLARE curs_conf CURSOR LOCAL FAST_FORWARD FOR SELECT [key], value FROM
myDB.dbo.getKeyValueTable(@confText)
I'm using a table variable, and it can take a long time to build.
I'm using a cursor because otherwise I would have to build the same table
variable twice (once to delete the records, once to insert them again if
needed).
According to the SQL Server documentation, it's not possible to assign a table
variable directly (SET @myTable = funcFillTable(...)), so I don't know any
better way than using a cursor...
Henri
"Nigel Rivett" <sqlnr@.hotmail.com> a crit dans le message de
news:F1E11350-C8FE-4572-96CC-7467CB3ADBDB@.microsoft.com...[vbcol=seagreen]
> Why use a cursor?
> Just use one statement for the insert and one for the delete.
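A side note on the SET @myTable = ... point: while a table variable can't be assigned that way, it can be filled from a table-valued function with INSERT ... SELECT, so the function only has to run once and both the DELETE and the INSERT can read the result. A minimal sketch, with the column types guessed from the cursor's variables:

DECLARE @conf TABLE ([key] varchar(20), value varchar(20))

-- Run the (possibly expensive) function a single time and keep its rows.
INSERT INTO @conf ([key], value)
SELECT [key], value
FROM myDB.dbo.getKeyValueTable(@confText)

-- @conf can now feed both set-based statements without rebuilding the data.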
> "Henri" wrote:
...[vbcol=seagreen]
@.confKey[vbcol=seagreen]
an
>
|||My procedure gets a key/value string like KEY:VALUE;KEY:VALUE;etc. from the
client, and a function converts it into a table variable of key/value records,
which my cursor uses.
The aim of this procedure is to update a "user configuration" table, removing,
updating and inserting configuration keys and values as specified by the client.
I thought it would be clever to delete the keys and insert them again when
needed with the updated values. That avoids having to test, for each key,
whether it is already present in the database and whether it has to be inserted
or updated.
As I said to Nigel, the cursor is used so that I don't have to build the
key-value table variable twice.
Am I missing something?
Henri
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> a crit dans le
message de news:za2dncKkk8fDBC7cRVn-qA@.giganews.com...
|||This is an example of why careful separation of database and application
layers is important. You shouldn't need to pass delimited lists into the
database. You've introduced a non-relational structure (the delimited list),
created some procedural code around it (the function), which has then forced
you to implement another compromise (the cursor) to work around the
function's perceived limitations!
It seems that what you actually need is an UPDATE followed by an INSERT,
something like the following. Call this for each of your key-value pairs.
Loops and string parsing are much easier and more efficient in client code.
CREATE PROCEDURE usp_conf_insert_update
    (@key VARCHAR(10), @value INTEGER)
AS
UPDATE tab_user_conf
SET conf_value = @value
WHERE conf_key = @key
IF @@ROWCOUNT = 0
    INSERT INTO tab_user_conf (conf_key, conf_value)
    SELECT @key, @value
GO
Of course, tab_user_conf looks suspiciously like an unnormalized list rather
than a table... but that's another discussion :-)
Hope this helps.
David Portas
SQL Server MVP
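As a purely illustrative example, the procedure would then be called once per pair, with the key and value invented here:

EXEC usp_conf_insert_update @key = 'page_size', @value = 25

In practice the call would come from the client loop David mentions.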
|||>> I thought it would be clever to delete the keys and insert them again
>> when needed with the updated values.
That's usually not a good idea. Only update data when necessary or you can
get into problems with RI, triggers and transaction logs.
You can use the function to insert into a temp table for the data updates.
Then delete any records that are not needed, then insert any new ones.
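A rough sketch of the flow Nigel describes, assuming the same function and column names as above; the "empty value means remove the key" rule and the update-only-when-changed filter are inferred from Henri's original code and Nigel's advice, not copied from either post:

-- Stage the client's key/value pairs once.
SELECT [key], value
INTO #conf
FROM myDB.dbo.getKeyValueTable(@confText)

-- Delete rows for keys the client sent with an empty value.
DELETE FROM tab_user_conf
WHERE [user_id] = @userId
  AND conf_key IN (SELECT [key] FROM #conf WHERE value = '')

-- Update existing keys only when the value actually changed.
UPDATE t
SET conf_value = c.value
FROM tab_user_conf AS t
JOIN #conf AS c ON c.[key] = t.conf_key
WHERE t.[user_id] = @userId
  AND c.value <> ''
  AND t.conf_value <> c.value

-- Insert keys that are new for this user.
INSERT INTO tab_user_conf ([user_id], conf_key, conf_value)
SELECT @userId, c.[key], c.value
FROM #conf AS c
WHERE c.value <> ''
  AND NOT EXISTS (SELECT 1
                  FROM tab_user_conf AS t
                  WHERE t.[user_id] = @userId AND t.conf_key = c.[key])

DROP TABLE #conf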
"Henri" wrote:

> My procedure is getting a key value string like KEY:VALUE;KEY:VALUE;etc.
> from client
> and a function converts it in a table variable of key value records, that my
> cursor uses.
> The aim of this procedure is to update an "user configuration" table,
> removing, updating and inserting configuration keys and values as specified
> by the client.
> I thought it would be clever to delete the keys and insert them again when
> needed with the updated values. That prevents from testing for each key, if
> it is already present in the base, and if it has to be inserted or updated.
> As I said to nigel, the cursor is used so that I don't have to build the
> key-value table variable twice.
> Am I missing something?
> Henri
>
> "David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> a écrit dans le
> message de news:za2dncKkk8fDBC7cRVn-qA@.giganews.com...
>
>
|||Thanks for your help, David.
I had always thought that multiple client-server round trips were slower than
one call with a string parsed on the server.
Is string parsing that slow in SQL Server?
Thanks again
Henri
"David Portas" <REMOVE_BEFORE_REPLYING_dportas@.acm.org> a crit dans le
message de news:ibCdnWWyc6JvOi7cRVn-qw@.giganews.com...
|||So creating a temp table is better than using a table variable or a cursor?
It's sometimes difficult to know what is faster and what is not.
Lucky you were here, thanks again :-)
Henri
"Nigel Rivett" <sqlnr@.hotmail.com> a crit dans le message de
news:B34C01FB-45DE-4A64-9FA1-DE66A293008A@.microsoft.com...[vbcol=seagreen]
|||It is certainly better than using a cursor.
"Henri" wrote:

> So creating a temp table is better that using a table variable or a cursor?
> It's sometime difficult to know what is faster and what is not.
> Lucky you were here, thanks again :-)
> Henri
> "Nigel Rivett" <sqlnr@.hotmail.com> a écrit dans le message de
> news:B34C01FB-45DE-4A64-9FA1-DE66A293008A@.microsoft.com...
> when
> that my
> specified
> when
> if
> updated.
> no
> this
>
>
