They may be downloading only half the data, or the whole of it, or just a little; it depends.
So the 80% sways it towards normal use downloading a lot of data? I would definitely keep it local, unless the data is to be updated by each user - in that case, you may find that having each client merge and distribute updates among 34 other clients is more of a headache than keeping it central.
Well, basically, I query one table, write a node, then based on that query I query another table to get the child nodes, and I am currently doing this via a DataReader (see below). So I am rendering the whole treeview at once.
I'd recommend that you only render sections of the treeview as they are expanded, and keep the node groupings relatively small - Windows Explorer doesn't read the folder structure of your entire hard disk as soon as you open My Computer; it reads each folder as the user selects it.
Form your queries so that the minimum of data is downloaded to populate the tree (node text and node ID should be all that is needed until a leaf is reached).
That's why I am putting the data into sub-categories (tree nodes), so they don't see all the data at once. I was thinking that, because there is a lot of data in each record which isn't relevant to the search, I could have an index number linking to another table with the same index number; once I have searched and found the row I want, I simply fetch that row from the other table, which holds the bulk of the information.
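That two-table split can be sketched like this - a minimal SQLite/Python illustration (your project is .NET, and all table and column names here are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- narrow table holding only the fields you search and show in the tree
    CREATE TABLE item_index (
        item_id INTEGER PRIMARY KEY,
        name    TEXT NOT NULL
    );
    -- wide table holding the bulk of each record, keyed by the same id
    CREATE TABLE item_detail (
        item_id     INTEGER PRIMARY KEY REFERENCES item_index(item_id),
        description TEXT
    );
""")
conn.execute("INSERT INTO item_index VALUES (1, 'Widget')")
conn.execute("INSERT INTO item_detail VALUES (1, 'A very long description...')")

# The search touches only the narrow table...
row = conn.execute(
    "SELECT item_id FROM item_index WHERE name = ?", ("Widget",)
).fetchone()

# ...then one keyed lookup fetches the bulk data for the chosen row.
detail = conn.execute(
    "SELECT description FROM item_detail WHERE item_id = ?", (row[0],)
).fetchone()
print(detail[0])
```

The point is that the scan over many rows only ever reads the small table; the big rows are fetched one at a time by primary key.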
It sounds like you're on the right track, and I don't know how many levels there are in your hierarchy, but typically you would have a table with 3 fields:
node_id, parent_node_id, data_record_id
node_id for new nodes is a sequential counter. Any node that has a parent has that parent's node_id recorded in parent_node_id, and data_record_id is normally null (unless parent nodes carry data) until you reach a leaf node.
Upon expansion of a node, you know the node ID that is being expanded, so you:
select * from node_hierarchy where parent_node_id = <the node id>
It's possible to do away with data_record_id and key the data records straight to the node - it just depends whether more than one node will have the same data record attached (i.e. two routes to the same place).
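The hierarchy table and expansion query above can be sketched like so - a SQLite/Python illustration (the sample tree data is invented; node_text is an assumed extra column for the display label):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE node_hierarchy (
        node_id        INTEGER PRIMARY KEY,   -- sequential counter
        parent_node_id INTEGER,               -- NULL for root nodes
        node_text      TEXT NOT NULL,         -- label shown in the tree
        data_record_id INTEGER                -- NULL until a leaf node
    )
""")
conn.executemany(
    "INSERT INTO node_hierarchy VALUES (?, ?, ?, ?)",
    [
        (1, None, "Root",   None),
        (2, 1,    "Branch", None),
        (3, 2,    "Leaf A", 101),
        (4, 2,    "Leaf B", 102),
    ],
)

def children(node_id):
    """Run on node expansion: fetch only the direct children."""
    return conn.execute(
        "SELECT node_id, node_text FROM node_hierarchy "
        "WHERE parent_node_id = ?", (node_id,)
    ).fetchall()

print(children(2))   # -> [(3, 'Leaf A'), (4, 'Leaf B')]
```

Each expansion downloads one small result set (IDs and labels only), which is exactly the little-and-often pattern.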
Would this be a bad idea?
Not at all. The only thing is that downloading should be like eating - a little and often, rather than everything at the start. It stops your program getting fat in memory, full of bits that will never be used.
Also, why not just index the whole database - or should you only index the fields that you search by?
There comes a point where indexing is either pointless or impossible: indexes must be maintained, and they add a level of indirection to accessing data. Sometimes the costs outweigh the gains and the index is useless. There are pages of articles written on database design and indexing, and I can't replicate them here. Suffice to say, it's a large and complex topic that is best asked about in context.
To answer your question, there are times when it's a waste to index even a field you search by, but generally, if you're repeatedly looking for a small number of results out of a huge number of rows, then the columns you're searching on become candidates for indexing. Be aware that indexes don't help on all kinds of searches, so you might have to pull some tricks.
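As a quick illustration of indexing a search column, here is a SQLite sketch (table and index names are invented) that uses EXPLAIN QUERY PLAN to confirm the planner would actually use the index for the search:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE records (id INTEGER PRIMARY KEY, category TEXT, body TEXT)"
)
# Index only the column we repeatedly search by, not every field.
conn.execute("CREATE INDEX idx_records_category ON records (category)")

# Ask the planner how it would run the search.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id FROM records WHERE category = ?", ("x",)
).fetchall()
for row in plan:
    print(row[-1])
```

The plan's detail text should mention idx_records_category, showing the query is a keyed index search rather than a full table scan.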
For example, if you're searching for strings that always end with certain characters, then a classic index based on the start of the string won't help. Instead you might choose to create an index on the result of reversing the string, and then use a reversal in your search. Not all DBMSs support this feature, of course; if yours does not, you might consider storing the data reversed for fast access and turning it round when displaying.