Question: Log website visits to a SQL Server 2000 database

M_Dave · New member · Joined: Nov 13, 2008 · Messages: 3 · Programming Experience: 3-5
I'm throwing this question out to the community for opinions. I have a large corporate website that is being ported from ASP to ASP.Net. We're adding a new feature: each page visit will be logged to a SQL Server 2000 database. My problem is that I don't want this logging to adversely affect performance - at peak times, the search page of this website alone can have 90 hits per second. Having every one of these hits open a DB connection and update a table will, I suspect, give the website quite a performance hit.

Is there a better way to get these records into a database? I had one idea: instead of logging to the database with each page hit, log the visit to an ADO.Net DataTable held in memory as an Application variable. I could then code a BackgroundWorker object that would monitor the size of this DataTable and, whenever it got to, say, 1000 rows, insert all those rows into the database and remove them from the DataTable.
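Roughly the sort of thing I have in mind is below. It's only a sketch: the table and column names are placeholders, and here the flush happens on the request that tips the buffer over the threshold rather than in a separate BackgroundWorker:

Imports System.Data
Imports System.Data.SqlClient

Public Class VisitLogger
    ' Sketch of the buffered-logging idea; VisitLog, Url and VisitedAt are placeholders.
    Private Shared ReadOnly _buffer As New DataTable("VisitLog")
    Private Shared ReadOnly _lock As New Object()
    Private Const FlushThreshold As Integer = 1000

    Shared Sub New()
        _buffer.Columns.Add("Url", GetType(String))
        _buffer.Columns.Add("VisitedAt", GetType(DateTime))
    End Sub

    ' Called from each page request: cheap, no database work unless the buffer is full.
    Public Shared Sub LogVisit(ByVal url As String)
        Dim toFlush As DataTable = Nothing
        SyncLock _lock
            _buffer.Rows.Add(url, DateTime.Now)
            If _buffer.Rows.Count >= FlushThreshold Then
                toFlush = _buffer.Copy()
                _buffer.Rows.Clear()
            End If
        End SyncLock
        ' Flush outside the lock so other requests are not blocked by the inserts.
        If toFlush IsNot Nothing Then Flush(toFlush)
    End Sub

    Private Shared Sub Flush(ByVal rows As DataTable)
        Using conn As New SqlConnection("connection string here")
            conn.Open()
            Using cmd As New SqlCommand( _
                "INSERT INTO VisitLog (Url, VisitedAt) VALUES (@Url, @VisitedAt)", conn)
                cmd.Parameters.Add("@Url", SqlDbType.VarChar, 500)
                cmd.Parameters.Add("@VisitedAt", SqlDbType.DateTime)
                For Each row As DataRow In rows.Rows
                    cmd.Parameters("@Url").Value = row("Url")
                    cmd.Parameters("@VisitedAt").Value = row("VisitedAt")
                    cmd.ExecuteNonQuery()
                Next
            End Using
        End Using
    End Sub
End Class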

Is this a good solution, though? I'm fine with coding ADO.Net, but I've never done anything which had to deal with the scale of transactions that this site will generate, so I'm inexperienced when it comes to performance management.
 
Just do the insert in the "bad" way. It'll perform better than you think. Don't index the table, or provide a primary key on it.
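Something along these lines is all it takes; with connection pooling the Open/Close per request is cheap. Table and column names here are only examples:

Imports System.Data
Imports System.Data.SqlClient

Public Module VisitLogging
    ' The "bad" way: one parameterized insert per request. The pool hands back
    ' an existing connection, so Open/Close does not mean a new network login each time.
    Public Sub LogVisit(ByVal url As String)
        Using conn As New SqlConnection("connection string here")
            conn.Open()
            Using cmd As New SqlCommand( _
                "INSERT INTO VisitLog (Url, VisitedAt) VALUES (@Url, @VisitedAt)", conn)
                cmd.Parameters.Add("@Url", SqlDbType.VarChar, 500).Value = url
                cmd.Parameters.Add("@VisitedAt", SqlDbType.DateTime).Value = DateTime.Now
                cmd.ExecuteNonQuery()
            End Using
        End Using
    End Sub
End Module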
 
My reasons: connection pooling, the low cost of inserting into an indexless table, and the fact that I have an Oracle load process here that does 1000 inserts per second.
 
A couple more questions..

A couple more questions: corporate database design standards mean that I can't have a table without a primary key. Will this significantly affect the performance of the index? Question 2 is - does your 1000-row-per-second bulk insert come from individual web transactions? I'm still trying to get an idea of how much the web server's performance will be affected by a database insert being part of the request servicing.

Again, thanks for the reply. I'm really grateful that you've taken the time to help.
 
A couple more questions: corporate database design standards mean that I can't have a table without a primary key.
Corporate database design standards like that need to be changed, because they are too clumsy and ignorant of the technical needs of the solution. It's about as logical to impose such a restriction as it is to say "No table shall have more than 100 records or 10 columns; if your data storage requirements exceed this vertically, make another table and union it in, or horizontally, make another table and join it in."

Hopefully you can see how stupid that is; it could only be dreamt up by someone who thinks "having hundreds of thousands of rows in a table is bad" but has no technical grasp of why their opinion is thus (or wrong!) :)

Will this significantly affect the performance of the index?
Er.. It's the index that affects the performance of the query.

You're making a logging table that will be inserted to thousands of times and queried seldom, probably always in aggregate. That kind of table does not need, and should not have, a unique index, because it is a 100% pointless waste of server resources to be able to uniquely identify a particular row and to maintain an N-leaf index for it (where N is the number of rows in the table).

You need to use your coding judgment to make a call on this one: blindly accept the corporate rule, or create the table to service the demands placed on it. Don't take my word for it; go ask other database people whether they would create a unique index (that will probably never be used) on a table that will have an insert:select ratio of thousands to one.
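For what it's worth, the kind of table I mean is nothing more than a heap. A one-off setup along these lines would do it (names are only examples, and you could equally run the DDL straight from Query Analyzer):

Imports System.Data.SqlClient

Public Module CreateLogTable
    ' One-off setup sketch: a plain heap with no primary key, no unique
    ' constraint and no indexes, so every insert is a cheap append.
    Public Sub Run()
        Dim ddl As String = _
            "CREATE TABLE VisitLog (" & _
            " Url varchar(500) NOT NULL," & _
            " VisitedAt datetime NOT NULL)"
        Using conn As New SqlConnection("connection string here")
            conn.Open()
            Using cmd As New SqlCommand(ddl, conn)
                cmd.ExecuteNonQuery()
            End Using
        End Using
    End Sub
End Module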

Question 2 is - does your 1000-row-per-second bulk insert come from individual web transactions?
In this context, it makes no difference. It's from a file of millions of customer details, read by a Windows service every night; as such, it performs exactly the same as it would in your context. A service (your service is IIS) opens a connection, inserts some data and closes the connection.
In classic insert mode, one record per insert, it gets around 1000 lines/sec. In a performance-enhanced insert mode, whereby it concatenates 25 rows into one string and passes it to a stored procedure, which then splits the string and performs 25 inserts locally, it does between 6000 and 9000 inserts per second.
Using Oracle's bulk transfer mode, passing an array of 500 records at a time, it does 10,000 inserts per second. The slight gain using OBT didn't offset the cost of installing the necessary drivers, so it wasn't done. It's a good example of how you can get a performance boost if you find classic insert mode to be lacking, but I don't think you'll get a similar increase from your DataTable idea, because under the hood it still operates in a broadly similar fashion; you'd need to implement some form of bulk transfer of records. Remember also that your data rates are some orders of magnitude lower than those discussed here (the cost of implementing a bulk solution may never be repaid in real-world gains).
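If you ever do find the plain insert lacking, the client side of that concatenation trick looks something like the sketch below. LogVisitBatch is a hypothetical stored procedure you would have to write yourself; it splits the string on the delimiters and does the inserts on the server:

Imports System.Data
Imports System.Data.SqlClient
Imports System.Text

Public Module BatchedLogging
    ' Client side of the "concatenate N rows into one string" approach.
    ' Assumes a DataTable with Url and VisitedAt columns, as in the earlier sketch.
    Public Sub FlushBatch(ByVal rows As DataTable)
        Dim sb As New StringBuilder()
        For Each row As DataRow In rows.Rows
            ' Field delimiter "|", row delimiter ";" - pick characters that cannot
            ' occur in the data, or escape them before appending.
            sb.Append(CStr(row("Url"))).Append("|")
            sb.Append(CType(row("VisitedAt"), DateTime).ToString("s")).Append(";")
        Next

        Using conn As New SqlConnection("connection string here")
            conn.Open()
            Using cmd As New SqlCommand("LogVisitBatch", conn)
                cmd.CommandType = CommandType.StoredProcedure
                ' varchar is capped at 8000 characters on SQL Server 2000, so keep batches small.
                cmd.Parameters.Add("@Rows", SqlDbType.VarChar, 8000).Value = sb.ToString()
                cmd.ExecuteNonQuery()
            End Using
        End Using
    End Sub
End Module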

I'm still trying to get an idea of how much the web server's performance will be affected by a database insert being part of the request servicing.
It might be an idea to ask Neal, the owner of this forum, roughly how many inserts/updates/deletes/selects it takes to make the forum work. If you want a busier example, ask forums.oracle.com - some forums actually print at the bottom of each page how many queries they ran and how long they took, usually 5 - 15 queries and 0.00xxx seconds. You additionally have the option of implementing your chosen method and then stress testing it ;)

The cost of inserting into an indexless table is negligible; the cost of inserting into a table that is maintaining an index is far more significant.
 