Question Optimistic concurrency misses varchar fields

Stonkie

Well-known member
Joined
Sep 12, 2007
Messages
279
Programming Experience
1-3
I am trying to create a very simple typed dataset using the designer in VS2005 to connect to my Article table on a SQL Server 2005 Express database, nothing out of the ordinary. I want to use optimistic concurrency because there will be many users working on that table.

But it seems that when I drag my table from the server explorer window onto the dataset designer's surface, it does not create the UPDATE and DELETE queries correctly for concurrency conflict detection. There are filters missing on the VARCHAR(MAX) fields, so another user could modify the article's content and I would overwrite their change without ever knowing it.

This is shown in the screenshots I attached. You can see there is no filter on any of the VARCHAR(MAX) columns.

Is this caused by the full-text search option being disabled? I seem to be able to solve the problem by modifying the DELETE and UPDATE queries manually, but I'd like to know why the designer does this. Performance isn't an excuse to cut corners that roughly...
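To show what I mean by "modify the queries manually", here is a sketch of the kind of WHERE clause I end up writing by hand. The @Original_* parameter naming follows the designer's convention, but the column list is simplified from my actual table, so treat this as illustrative rather than the exact generated query:

```sql
-- Optimistic concurrency UPDATE with the VARCHAR(MAX) column included.
-- The designer omits the Content comparison; I add it back by hand:
UPDATE Article
SET Title = @Title, Content = @Content
WHERE Id = @Original_Id
  AND Title = @Original_Title
  AND Content = @Original_Content  -- the filter the designer leaves out
```

If another user changed Content in the meantime, the WHERE clause matches no rows, zero rows are affected, and the TableAdapter reports a concurrency violation.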
 

Attachments

  • Article table structure.JPG
  • Article delete query.JPG
If your field is defined as VARCHAR(MAX), then the designer might either:

  • not know what the type is, because it's not VARCHAR(<number>), or
  • know that this is a huge field, and that it isn't good to search and compare from a performance point of view.


See what happens if you make tables with VARCHAR(100), (200), (400), (800), (1600), etc., and check whether there is a cutoff point beyond which the designer stops making the field part of the search.
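Something along these lines would show where the cutoff is, if there is one (table and column names here are hypothetical):

```sql
-- Drag each of these onto the dataset designer and check which
-- Txt columns appear in the generated UPDATE/DELETE WHERE clause.
CREATE TABLE Test100  (Id INT PRIMARY KEY, Txt VARCHAR(100));
CREATE TABLE Test800  (Id INT PRIMARY KEY, Txt VARCHAR(800));
CREATE TABLE Test8000 (Id INT PRIMARY KEY, Txt VARCHAR(8000));
CREATE TABLE TestMax  (Id INT PRIMARY KEY, Txt VARCHAR(MAX));
```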


PS: that's one weird-looking optimistic concurrency query. Did the designer generate that?
The Oracle ones look very different; the ORing is done first.
 
See what happens if you make tables with VARCHAR(100), (200), (400), (800), (1600), etc., and check whether there is a cutoff point beyond which the designer stops making the field part of the search.

You are right: whatever length I make my VARCHAR, it always produces the right query, except when it's VARCHAR(MAX). If the designer could not recognize the type, I think it wouldn't map it to the String type correctly, so we can rule that one out. I'm thinking this is a matter of performance, yet performance isn't an issue for me as my fields are rather short.

I was just too lazy to add error handling all over my code to prevent color names or whatever from getting too long. I believe the VARCHAR(MAX) type behaves like any other VARCHAR type for sizes under 8 KB (one page) in terms of performance, so I never saw the point of limiting the size of the fields. Am I going against best practices by doing this?

PS: that's one weird-looking optimistic concurrency query. Did the designer generate that?
The Oracle ones look very different; the ORing is done first.

Yes, this is the query exactly as generated when I added the table to the designer; I didn't change anything. It would be simpler if it weren't for my UpdateId and DisplayOrder nullable columns...

Anyway, I posted this question on the MSDN forums, and the answer was pretty much that it may be a bug or a design decision. In any case, there's nothing to be done about it, so I'll have to keep editing the queries manually, and remember to use timestamp columns the next time I design a database.
 
You are right: whatever length I make my VARCHAR, it always produces the right query, except when it's VARCHAR(MAX). If the designer could not recognize the type, I think it wouldn't map it to the String type correctly, so we can rule that one out. I'm thinking this is a matter of performance, yet performance isn't an issue for me as my fields are rather short.
Given that VARCHAR(MAX) can hold up to 2 gigabytes, you can maybe see why it would be a design decision not to bother including string comparisons on columns of this type.


Am I circumventing best practices by doing this?
It seems not, though if your varchars aren't going to exceed 8 KB, you can reduce your optimistic SQL headache by using VARCHAR(8000) instead of VARCHAR(MAX).



I'll have to keep editing the queries manually and eventually remember to use Timestamp columns the next time I design a database.
Nothing stopping you adding a timestamp column, or a counter maintained in a trigger: on every insert, set the counter to 0; on every update, increment it by one. Your optimism can then hinge on that counter being the same as when the row was downloaded. Very similar to a timestamp, I suppose.
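A sketch of the counter idea, assuming an Article table with an Id primary key (the column and trigger names are hypothetical):

```sql
-- Per-row version counter: new rows start at 0 via the default,
-- and the trigger bumps it by one on every update.
ALTER TABLE Article ADD UpdateCounter INT NOT NULL DEFAULT 0;
GO
CREATE TRIGGER trg_Article_Version ON Article
AFTER UPDATE
AS
    UPDATE a
    SET UpdateCounter = a.UpdateCounter + 1
    FROM Article a
    JOIN inserted i ON a.Id = i.Id;
GO
```

Your UPDATE and DELETE statements then only need `AND UpdateCounter = @Original_UpdateCounter` in their WHERE clause, instead of comparing every column. (Recursive triggers are off by default in SQL Server, so the trigger's own UPDATE won't re-fire it.)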
 