It's not answered because, *typically*, we don't use DataTables in this way.
A DataTable is a client-side, in-memory repository of data (which may be all, some or none of what is in a database table). DataTables are typically used to back components: the contents are shown in textboxes, grids and combos, may be updated by the user, and are then sent back to the database.
We don't often do operations on each row.
I do want to point out at this juncture that, for the purposes of writing out to a file, a DataReader is much more suitable than a DataTable. In fact, a table adapter uses a DataReader internally to pull data out of the database and put it into the table. In this regard, the DataTable is something of a middleman, and it eats your memory. You might not care if the number of records is low, but if you are writing 700 MB of data to a file, using a DataTable would need 700 MB of RAM to hold all the data as an interim.
Using a DataReader in this scenario means the data can be dripped from the database into the file with a low memory overhead.
An analogy: watering your garden, which needs 10 gallons of water. If you bought a 10-gallon watering can (the DataTable) and filled it, you'd have a very heavy (memory-consuming) thing to carry while you water the garden (write the file).
If instead you had a hose (DataReader -> file writer) that could spray far enough to water everything, carrying it would be a much lighter job.
So how do we get a DataReader? Well, you can go the long way and set one up from scratch, or you can get a little help from your table adapter, which is already set up. Because the table adapter is only designed to read and write DataTables, it doesn't expose the reader it uses internally; you'll need to expose it yourself by adding a little code to the back end.
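For reference, the "long way" looks something like this. This is only a sketch: the connection string and the query are placeholders you would replace with your own, and you have to manage the connection yourself (which is exactly the chore the table adapter route described next saves you from):

```vb
'The "long way": create the connection, command and reader by hand.
'Connection string and query are illustrative placeholders.
Imports System.Data.OracleClient

Module ReaderTheLongWay
    Sub Demo()
        Using conn As New OracleConnection("Data Source=XE;User Id=scott;Password=tiger;")
            conn.Open()
            Using cmd As New OracleCommand("SELECT * FROM CRDO_CREDIT_EVALUATION", conn)
                Using reader As OracleDataReader = cmd.ExecuteReader()
                    While reader.Read()
                        'process each row as it streams past
                    End While
                End Using
            End Using
        End Using 'connection is closed and disposed here
    End Sub
End Module
```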
Right-click the TableAdapter in the dataset designer and choose View Code.
This actually puts you into the code for the DataTable, which isn't what we want.
We will have to change the namespace we are in (all TableAdapters go in their own namespace) and add a partial class.
In my project, called Credaro, I have a dataset called CredaroDS. The TableAdapters' namespace is therefore called Credaro.CredaroDSTableAdapters. Add the following code:
Imports System.Data.OracleClient

Namespace Credaro.CredaroDSTableAdapters
    Partial Class [B]CRDO_CREDIT_EVALUATIONTableAdapter[/B]
        Public Function GetReader() As OracleDataReader
            'make sure the connection is open before we ask for a reader
            If Me.Connection.State <> System.Data.ConnectionState.Open Then Me.Connection.Open()
            Me.Adapter.SelectCommand = Me.CommandCollection(0)
            'CloseConnection means closing the reader also closes the connection
            Return Me.Adapter.SelectCommand.ExecuteReader(System.Data.CommandBehavior.CloseConnection)
        End Function
    End Class
End Namespace
This partial class MUST have the same name as your TableAdapter (bold). It is not an extension of the class, more like an addition: the two partials will be combined into a single class during compilation. Because the other part of the class is generated by the IDE, it is destroyed and regenerated every time a change is made in the visual designer; this is why we don't put our code in with the stuff that will be destroyed.
Now we can (hopefully) get a reader that emits fields with the same signature as the DataTable.
-
I want to talk, at this point, about strong versus weak typing.
When you put a dataset into your project and name all the tables, it becomes a strongly typed dataset. This dataset descends from the generic DataSet, which is weakly typed. If you don't know about inheritance in object-oriented languages, go and learn about it now.
All the DataTables within your strongly typed dataset (let's call it STDataSet) are also strongly typed, and all the rows in those tables are strongly typed too.
Often I see this code:
For Each ro As DataRow In STDataSet.Tables("STDataTable").Rows
    If DirectCast(ro.Item("STDataColumnInt"), Integer) = 0 Then ...
and I'm like "No, no, nooo..."
There is no point setting up a strongly typed dataset and then using it in weakly typed mode, having to cast everything every time you want to use it!
We write this:
For Each ro As [B]STDataRow[/B] In [B]STDataSet.STDataTable[/B].Rows
    If ro.STDataColumnInt = 0 Then ...
It means IntelliSense can help us write it, we get the types right, we don't have to remember what every single column is called, and there is no ugly, messy, lengthy casting (because VB's casting syntax is awful compared to C#'s) just to remove the "object-flavoured wrapping paper" from our "Integer" value.
So... bear it in mind. If you have a typed dataset, always work with it in typed mode as much as possible! If you ever find yourself specifying a table name or column name as a string or integer:
MyTypedDataSet.Tables("SpecifyingTableNameAsString")
MyTypedDataSet.Tables(0)
MyTypedDataSet.Tables(0).Columns("SpecifyingColumnNameAsString")
MyTypedDataSet.Tables(0).Columns(0)
Think to yourself, "no - I'll see if I can do it the easier, better, safer, faster, typed way":
MyTypedDataSet.SpecifyingTableNameDirectly
MyTypedDataSet.SpecifyingTableNameDirectly.SpecifyingTableNameDirectlyDATACOLUMN
Exceptions to this are when you really need a For loop with an integer indexer, but try to use a For Each in this case if possible.
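Even then, you can usually stay typed: the generated typed DataTable has a default Item property whose indexer returns the typed row, so an integer-indexed loop doesn't force you back into casting. A sketch, reusing the illustrative STDataSet names from above:

```vb
'Integer-indexed loop that stays strongly typed.
'STDataSet/STDataTable/STDataRow/STDataColumnInt are the illustrative
'names used earlier - substitute the names your dataset generates.
For i As Integer = 0 To STDataSet.STDataTable.Count - 1
    Dim ro As STDataRow = STDataSet.STDataTable(i) 'typed default indexer, no cast
    If ro.STDataColumnInt = 0 Then
        '...
    End If
Next
```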
-
Last point of note, now you know all this...
The DataReader is never subclassed by the IDE. There is no STDataReader for the STDataTable, so the little hack above simply has to return a generic DataReader, and that HAS to be used like this:
While reader.Read() 'advances the reader to the next row
    fileWriter.Write(reader.GetString(2)) 'get the third column (ordinal 2) as a string
    fileWriter.Write(reader.GetInt32(7)) 'get the eighth column (ordinal 7) as an integer
End While
or like this:
While reader.Read() 'advances the reader to the next row
    fileWriter.Write(reader("PersonName"))
    fileWriter.Write(reader("PersonAge"))
End While
You cannot use the generic DataReader like this:
While reader.Read() 'advances the reader to the next row
    fileWriter.Write(reader.PersonName) 'will not compile - no typed properties
    fileWriter.Write(reader.PersonAge)
End While
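Putting it all together, a complete sketch of the GetReader hack feeding a file might look like this. The file path and the PersonName/PersonAge column names are assumptions for illustration; the Using blocks ensure the file, the reader and (thanks to CommandBehavior.CloseConnection) the database connection are all closed even if a write fails:

```vb
'Sketch: stream rows straight from the reader into a file.
'File path and column names are illustrative assumptions.
Dim ta As New Credaro.CredaroDSTableAdapters.CRDO_CREDIT_EVALUATIONTableAdapter()

Using fileWriter As New System.IO.StreamWriter("C:\temp\export.txt")
    Using reader As OracleDataReader = ta.GetReader()
        While reader.Read() 'only one row in memory at a time
            fileWriter.Write(reader("PersonName"))
            fileWriter.Write(","c)
            fileWriter.WriteLine(reader("PersonAge"))
        End While
    End Using 'CloseConnection behaviour closes the connection here
End Using
```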