mikethecoder
New member
- Joined: Aug 23, 2008
- Messages: 3
- Programming Experience: Beginner
I have plenty of programming experience, but unfortunately not as much with .NET, and a project has been dropped on me at the last minute. I can write the code, but this issue comes down to performance and choosing the best solution, and without knowing the language inside and out, I don't know which approach is best.
We have a program that runs 24/7 doing imports. It creates a new importer object that takes a tab-separated text file from a UNC path on our server and uses it to build a DataSet representing the data in the text file. After the text file is processed into one DataSet per table (4 tables), we call the data adapter's Update method for each DataSet to insert the data into an Access .mdb file. So the process is essentially updating an existing Access database file based on changes in a tab-separated .txt file.
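In rough outline, the current code does something like this per table (heavily simplified; table and column names are just placeholders, not our real schema):

```csharp
// Simplified sketch of the current per-table flow: parse the tab-separated
// file into a DataTable, then push it to the .mdb with a data adapter.
using System;
using System.Data;
using System.Data.OleDb;
using System.IO;

class ImporterSketch
{
    static void ImportTable(string txtPath, string mdbPath)
    {
        var table = new DataTable("Orders");
        table.Columns.Add("OrderId", typeof(int));
        table.Columns.Add("Amount", typeof(decimal));

        // Every record is buffered in memory here -- this is the part that hurts.
        foreach (var line in File.ReadAllLines(txtPath))
        {
            var parts = line.Split('\t');
            table.Rows.Add(int.Parse(parts[0]), decimal.Parse(parts[1]));
        }

        using (var conn = new OleDbConnection(
            "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + mdbPath))
        {
            var adapter = new OleDbDataAdapter(
                "SELECT OrderId, Amount FROM Orders", conn);
            var builder = new OleDbCommandBuilder(adapter); // generates the INSERT
            adapter.Update(table); // one big in-memory batch pushed to the .mdb
        }
    }
}
```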
The problem is that these imports run very frequently, and we recently got a customer with a much larger database than ever before. The DataSets have turned out to be the culprit when running the update: they use too much server memory and cause other imports to time out. I was told to come up with an alternative to DataSets.
Any ideas what the best method would be, performance-wise, to avoid buffering the text file data in DataSets and pushing the DataSets back to the .mdb file? Since we need to keep server memory use to a minimum, should the solution just be to run a SQL INSERT command against the database for each record?
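Something like this is what I have in mind, if that's even the right direction (a minimal sketch, assuming OleDb against the .mdb; the table and column names are placeholders):

```csharp
// Streaming alternative: read one line at a time and execute a parameterized
// INSERT per record, wrapped in a transaction so 60k rows don't each pay a
// separate commit. Nothing is held in memory beyond the current line.
using System;
using System.Data.OleDb;
using System.IO;

class StreamingImportSketch
{
    static void ImportTable(string txtPath, string mdbPath)
    {
        using (var conn = new OleDbConnection(
            "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + mdbPath))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            using (var cmd = new OleDbCommand(
                "INSERT INTO Orders (OrderId, Amount) VALUES (?, ?)", conn, tx))
            {
                // OleDb parameters are positional, so the names are cosmetic.
                var pId = cmd.Parameters.Add("@p1", OleDbType.Integer);
                var pAmt = cmd.Parameters.Add("@p2", OleDbType.Currency);

                using (var reader = new StreamReader(txtPath))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        var parts = line.Split('\t');
                        pId.Value = int.Parse(parts[0]);
                        pAmt.Value = decimal.Parse(parts[1]);
                        cmd.ExecuteNonQuery();
                    }
                }
                tx.Commit();
            }
        }
    }
}
```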
The average import, across all 4 tables combined, processes about 60,000 records. Thanks!