Best database to use with vb.net applications

Ultrawhack

Well-known member
Joined
Jul 5, 2006
Messages
164
Location
Canada
Programming Experience
3-5
Hi,

I'd like everyone's opinions on this please. What would be the best backend database to adopt with a vb.net application - assuming the app is going to be deployed on a per-user basis, 1 user 1 backend - no backend sharing via intranet, etc.

I'd also like to know your opinions on strengths/pitfalls of various db formats.

Thanks !
 
dellman said:
However, getting your strongly typed dataset to exist in the first place without a database, I have never tried this. I assume that its easy, I just never bothered.

You've never used the DataSet designer? The thing in VB that looks like a combination of the table design and relationships windows in Access?

Wow.. anyways, yeah.. that's where you make your typed datasets :)
 
I've never used it either.... I'm leery about coupling my data so tightly to the source like that. From what I've seen of other people, if I go to make a change to the underlying tables, it's a pain to get the typed DS to correct itself.

Plus, with the number of tables I deal with and the relations between them, it would be unmanageable to do in VS like that.

-tg
 
OK, here are some tips (because I've had the same angst with it that you have now):

When you make a change to an underlying table, the change needs reflecting. Sometimes this is as simple as changing the DataType or MaxLength property of the column in the DataTable.
If you added or dropped a column, you need to reconfigure the DataTable. Right-click it and choose Configure, make sure the SQL that embodies the table is correct with respect to the new column, then click OK.
The designer will likely make a scrappy mess of your UPDATE and INSERT, though it does ask "Do you want to overwrite the I/U/D statements too?"
I say no, and edit them manually.
If using stored procedures, you must also visit the Parameters collection of the command and update the parameters, as well as updating any underlying stored procedures in the DB if necessary. (DB changes are not light-hearted work! :) )


As for maintaining relationships in VS: it's not really for that. If you have one set of tables in your database, you might have several datasets, simply because if you have a form that edits one table, you don't want to create a set of 420 tables to do it. My current app has 3 DataSets: Hub is for all data that changes, Lookup is for lookup data like combobox Display/Value member stuff, and Ghost is for tables that don't exist - stored procedures that don't do anything with one table or that affect lots of tables - all the junk that can't be tied to one clear table in my Hub.

On the relational note, you don't make any more DataRelation objects than you need in VS. I've never seen them to be of use during updates or deletes (I use them for relational stuff only), and the relations in the set might not always run in the same direction as the relationship in the database. The direction a relationship runs in is determined by which table is the parent and which is the child. The parent table is the one you search on and load the DataTable with first (suppose it's customer data); then, for every customer you loaded, you load his orders into the Orders DataTable. Then, for every ProductID in the Orders, you load the product data..
If you imagine this in your mind you might see you have:

Customer.CustomerID]8-------F[Orders.CustomerID
Product.ProductID]F--------8[Orders.ProductID

8 is the little infinity symbol (or M), F is the key symbol (or 1 if you're used to 1:M notation)

but in your dataset you might arrange this:
Customer.CustomerID]8-------F[Orders.CustomerID
Product.ProductID]8--------F[Orders.ProductID

because you want the Orders ProductID to be used to drive a relation from Orders into Products. It's weird, but once you have to build and do it, it makes sense :)


So: don't be afraid of DataSet changes (though those dreaded "Form could not be loaded.. XXX is not present on the datasource" errors in the Forms designer are a NUISANCE :) - they need solving in code view in the form's .Designer.vb file), they aren't that bad :)
And use relationships according to the driving order rather than the true 1:M layout of the database relationships.
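
The driving order described above can be sketched as follows. This is a minimal Python illustration (not VB.NET, and not the DataSet API); the table names follow the diagram, but all the row data is invented. The point is that Customers drive Orders and Orders drive Products, so the in-memory relation runs from Orders into Products even though the database relationship is Products 1:M Orders.

```python
# Invented sample data standing in for the full database tables.
db_orders = [
    {"OrderID": 10, "CustomerID": 1, "ProductID": 100},
    {"OrderID": 11, "CustomerID": 1, "ProductID": 101},
    {"OrderID": 12, "CustomerID": 2, "ProductID": 100},
]
db_products = [
    {"ProductID": 100, "Name": "Widget"},
    {"ProductID": 101, "Name": "Gadget"},
    {"ProductID": 102, "Name": "Sprocket"},
]

# 1. Load the parent table (Customers) first - here, one searched-for customer.
customers = [{"CustomerID": 1, "Name": "Acme"}]
loaded_ids = {c["CustomerID"] for c in customers}

# 2. For every loaded customer, load his orders.
orders = [o for o in db_orders if o["CustomerID"] in loaded_ids]

# 3. For every ProductID in the loaded orders, load the product row:
#    Orders is the "parent" driving the load of Products.
needed = {o["ProductID"] for o in orders}
products = [p for p in db_products if p["ProductID"] in needed]

print([p["Name"] for p in products])  # only the products the loaded orders refer to
```

Note that Sprocket is never loaded: the relation only ever pulls child rows that the driving table actually references, which is why the dataset relation direction follows load order rather than the schema's 1:M layout.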
 
Just goes even further to prove that they would be overkill in my case....
I rarely deal with large lists of data at a single time unless they somehow relate to the parent object being worked on. I also have this irrational need to be in control of my data at all times. Maybe it's because the practices of the systems I work on don't lend themselves to simple lists & grids and just adding new rows. There's a lot of BL that goes hand in hand with it.

-tg
 
I don't bother editing my procedures and stuff when I make a change to my database, and I am not afraid of making those changes either... This is because a buddy of mine and I wrote a database designer app. In essence it writes ALL the CRUD and a huge number of other procedures for me. In fact it sets up my Data Layer, Business Logic Layer, and controller for as many or as few SQL Server tables as I want. I can generate about a year's worth of coding for one developer in about 30 seconds (a database with 100 tables and all the stored procedures and DAL, BLL and controller code).

I am in the process, this weekend in fact, of doing a minor re-write on that codebase. I am updating the code to be fully 2005 compliant (just the project files and sln files it generates need some tweaking) and adding cross-table joins (the new code will also generate all the stored procedures when it finds parent-child relationships), plus all the code required to make parent-child CRUD fully ACID, by implementing most of the code required to make the system transactional. Of course it won't be smart enough to know which tables you want transactions between and which you don't, so it won't do that for you, but it will write most of the code so that actually implementing the transaction itself is as easy as adding another call to the Business Logic class.

I doubt all of these changes will be active this weekend, but they will probably be done within a month or so, since I do have a full-time job contending for my time as well. That code generator has saved me 1000+ hours to date, and I have had it for less than a year. The best program I have ever helped write.

**Update** No, I have never used the Typed Dataset generator to create a typed dataset NOT based on an existing table. I have used it extensively for creating datasets that are based on existing databases... I guess my statement was none too clear.

**CRUD refers to Create, Read, Update, Delete. I assume you all know this, but just in case someone reads my stuff above and does not know, I figured I would add this.
 
I looked at a few programs that said they did things like that - stuff like LLBLGen - but I just couldn't get into them; nothing seemed to quite do what I wanted. If you were offering that app for download, I'd love to take a look, but it'd need to be Oracle compliant..
 
Dellman, I'm sure you've opened a can of worms by mentioning that you have helped to write such an application. I've used a couple of commercial ones (just on a trial basis) but, as cjard has mentioned, they are not always up to scratch on certain issues. I'm not going to ask you if you will post your app for us all to take a look at - unless you want to, that is....

But I'm interested to know how you overcame some problems that the commercial ones haven't been able to.

1. Object Oriented Data Driven Modeling has been known to cause poor Entity Relationship Models; did you manage to overcome the so-called 'Impedance Mismatch'?

2. Did you use XML mapping/XML descriptors, which can make maintenance difficult? Or did you go some other route?
 
I will post a generated DAL project, start to finish, for any database of 10 tables or fewer that someone wishes to post. Of course, I would make no statement of requirement on how the code I post gets used, but I would prefer that someone not make a million dollars on it and not share with all of us on the forum :D. Also, note that the code is undergoing constant upgrade and development, and that I would only post code generated from the latest stable build which, to be honest, isn't as nice as what I envision it could be (but at least it works). I was planning on adding a bunch of stuff to it over the weekend, but another (money-making) project took up most of my time, so I did not get to it. It only does the primary CRUD basics, and anything beyond the basics you have to do yourself. However, let's say you have a table that looks like this:
PhoneCallKey (pk)
FromKey (fk)
TypeOfCall (int) - (enum)
CallContent (text)

You would normally get CREATE, READ, UPDATE and DELETE procedures; mine will also generate DeleteByFromKey, DeleteByTypeOfCall, ReadByFromKey and ReadByTypeOfCall procedures. This is handy, since when you are working with basic CRUD you often end up writing exactly this type of procedure - mini-searches, if you will. Please note they are not LIKE searches but = searches, so they are not true searches; more like subset queries.
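
The templating idea behind those per-column procedures can be sketched roughly as below. This is not dellman's actual generator (which was never posted); it is a minimal Python illustration of emitting `=` based ReadBy/DeleteBy stored procedure text from a column list, with the table and column names taken from the example above and the parameter types simplified to `int`.

```python
def generate_procs(table, columns):
    """Emit CREATE PROCEDURE text for a ReadBy and DeleteBy 'mini-search'
    on each given column. A real generator would read names and types
    from the database schema; here everything is hard-coded for brevity."""
    procs = []
    for col in columns:
        # Equality subset query, not a LIKE search.
        procs.append(
            f"CREATE PROCEDURE {table}_ReadBy{col}\n"
            f"    @{col} int\nAS\n"
            f"    SELECT * FROM {table} WHERE {col} = @{col}\n"
        )
        procs.append(
            f"CREATE PROCEDURE {table}_DeleteBy{col}\n"
            f"    @{col} int\nAS\n"
            f"    DELETE FROM {table} WHERE {col} = @{col}\n"
        )
    return procs

for proc in generate_procs("PhoneCall", ["FromKey", "TypeOfCall"]):
    print(proc)
```

As the post says, "the areas that can differ from procedure to procedure are essentially variables" - the template is fixed and only the table and column names are substituted in.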

One thing I want to add in the next version (if possible) is a fully functional typed dataset creator as well, but I need to research the method Microsoft uses to do this and see if I can bust in on their show and kinda re-use the object they use. ;-)

Anyway, I will wait for someone to post a db (preferably a SQL table CREATE script). If by tomorrow night there is none, I will pull one of my old dbs, generate the code on it, and post the ZIP for people to preview, along with the create script of course.

But I'm interested to know how you overcame some problems that the commercial ones haven't been able to.

1. Object Oriented Data Driven Modeling has been known to cause poor Entity Relationship Models; did you manage to overcome the so-called 'Impedance Mismatch'?

2. Did you use XML mapping/XML descriptors, which can make maintenance difficult? Or did you go some other route?
Hmm.. To be honest, I took the slice of code that I always wrote for stored procs and stuff and made a code generator; all it literally does is say "OK, you have this construct, paste in the differences, and here is your new procedure." The areas that can differ from procedure to procedure are essentially variables. I did not research it before I did it, so unfortunately I kinda did it without really knowing what was already available etc.

I assume from your use of the term Impedance Mismatch that you are referring to the ability of the generated code to "plug in", as it were, to existing apps. I would say that the ability on this score is very high. The reason is that when you call my object you construct it, and the constructor requires your connection string; hence my object is essentially DUMB about where you store the data. It couldn't care less: you supply it with a valid connection string to the database it was scripted from and it will work, as long as the database server is of a compatible nature, and currently I only support SQL Server 2000/2005 from MS. I plan to add support for MySQL in my next update to the software, but that may require a total re-write due to the way the first version was coded.

As far as mapping is concerned, I actually brute-forced this version, because a buddy of mine and I had it coded inside of a week and put hardly any design into it before starting. It's going to be re-written to fix this issue, improve maintainability, and hopefully gain a two- to tenfold performance enhancement.

Cheers.
 
dellman said:
I assume from your use of the term Impedance Mismatch that you are referring to the ability of the generated code to "plug in", as it were, to existing apps.

The term Impedance Mismatch originated in electrical engineering, where the output from one circuit is incompatible with the input of another, usually because the range of inputs supported is either too limited or too excessively varied to be usable. It's become more widespread than just the EE context now, and applies to any situation where some mediative process is applied between two standards that are relatively incompatible.

In terms of databases and OO programming languages, this boils down to the notion that the OO design goals of encapsulation, private data, and modified behaviour of child objects cannot be mapped to the data storage mechanism of an RDBMS very easily. Tables, as a data store, are relatively immutable in structure compared to objects. You can declare a class Person and a table Person, but when you subclass Person, you cannot easily 'subclass' the table to support the modified behaviour. All data within a table is generally visible, and establishing relationships between tables, in some ways, breaks the OO design goal that objects should function standalone and not be coupled with other objects.
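
A tiny sketch of that mismatch, using invented classes in Python for neutrality: the subclass adds state and behaviour trivially, but a flat table row can only record the union of all columns plus a type discriminator, and every column is visible regardless of what the class marked private.

```python
class Person:
    def __init__(self, name):
        self._name = name            # "private" by convention only

class Employee(Person):              # subclassing is trivial in OO code...
    def __init__(self, name, salary):
        super().__init__(name)
        self._salary = salary        # extra private state on the child

def to_row(p):
    # ...but the table cannot be 'subclassed': we flatten both classes into
    # one schema with a discriminator column, and everything becomes visible.
    return {
        "type": type(p).__name__,
        "name": p._name,
        "salary": getattr(p, "_salary", None),  # NULL for plain Persons
    }

print(to_row(Person("Ann")))       # {'type': 'Person', 'name': 'Ann', 'salary': None}
print(to_row(Employee("Bob", 5)))  # {'type': 'Employee', 'name': 'Bob', 'salary': 5}
```

The `salary` column exists for every row, even rows it does not apply to; that leakage of structure and visibility is the mismatch in miniature.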

In some ways, maybe without you realising it, your app will go some way to solving this problem by adding a layer of abstraction between the two. Typically, one of the most followed paths is simply to treat the database as the persistent data storage for the objects. Care should then be duly assigned as to what is the authoritative reference for data in terms of up-to-dateness; there are arguments for the latest recorded data in the client-side object being the authoritative reference, just as there are arguments that the database be the point of reference. As there is usually a connective link between an in-memory object on a client and the database that is persisting its data, you hit the problem that the object isn't storing its own data if the database is to be the authoritative reference (perhaps wise in a multi-user system). Thus the client side should store little data at all, and constantly fetch from the database whenever data is requested. Typically the overheads make this a bad idea, so the design goal of encapsulation is broken.

Judging by your response (and I don't mean this offensively), I don't think you've considered these issues, and in your contexts they may be unimportant; that's absolutely fine, and programming, like math, is about getting the right answer with the least work. I'm sure you would be the first to admit that a helper app cannot solve all problems automatically, and to point out that any help is welcome - a belief I support too. I'd love to take up the offer of a project generated from an SQLS DB, but I don't have one to hand. I'll watch with interest if someone else posts the necessary support material for you though :)

You may find the following article reasonably interesting:

http://en.wikipedia.org/wiki/Impedance_mismatch
http://en.wikipedia.org/wiki/Object-Relational_impedance_mismatch
 
Judging by your response (and I don't mean this offensively), I don't think you've considered these issues, and in your contexts they may be unimportant; that's absolutely fine, and programming, like math, is about getting the right answer with the least work. I'm sure you would be the first to admit that a helper app cannot solve all problems automatically, and to point out that any help is welcome - a belief I support too.

You are absolutely correct. I did not consider these issues. While I have been programming for years and years, and am somewhat knowledgeable in solving problems (sometimes in a way that is not the best way)... I only recently graduated from technical college with my diploma in software design.

In some ways, maybe without you realising it, your app will go some way to solving this problem by adding a layer of abstraction between the two. Typically, one of the most followed paths is simply to treat the database as the persistent data storage for the objects.

The app only creates the persistent storage interface mechanism. If you design your database and app to store objects in whatever form (reference my previous posts on serialization to XML), where the db just stores these serialized objects and the app rebuilds them after requesting their serialized forms from the db, then my app will help, especially for the CRUD and the generation of stored procedures. If you plan to allow the DAL object to reconstruct these objects, then of course you have to create a dependent DLL that encapsulates your objects and reference it from both your DAL DLL and your app. Done in this form, you can then modify the Business Logic to reconstruct your objects for you, so that your controller is only aware of objects or arrays of objects and really has no clue about the persistence layer underneath.
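
The serialize-store-rebuild round trip described above can be sketched as follows. This is a Python illustration (not the VB.NET/XmlSerializer approach the post refers to), with an invented `Customer` class and a dict standing in for the database table of serialized blobs.

```python
import xml.etree.ElementTree as ET

class Customer:
    def __init__(self, key, name):
        self.key, self.name = key, name

def serialize(c):
    # Flatten the object into an XML string - this is all the db ever sees.
    el = ET.Element("Customer", key=str(c.key), name=c.name)
    return ET.tostring(el, encoding="unicode")

def deserialize(xml_text):
    # The app rebuilds a live object from the serialized form it fetched.
    el = ET.fromstring(xml_text)
    return Customer(int(el.get("key")), el.get("name"))

fake_db = {}                              # stands in for a table of blobs
fake_db[1] = serialize(Customer(1, "Acme"))

rebuilt = deserialize(fake_db[1])
print(rebuilt.key, rebuilt.name)          # 1 Acme
```

The controller above only ever touches `Customer` objects; the persistence detail (XML text keyed in a store) stays hidden behind `serialize`/`deserialize`, which is the layering the post describes.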

Care should then be duly assigned as to what is the authoritative reference for data in terms of up-to-dateness; there are arguments for the latest recorded data in the client-side object being the authoritative reference, just as there are arguments that the database be the point of reference.

It was my understanding that (barring special circumstances) if your database implements TimeStamp, then the TimeStamp field is the reference for up-to-dateness. This would essentially mean that if the timestamp is different from when your record was read to when it is re-written, the end user is at least given the option to save or not save the data. Of course, if you predicate your business logic on the app containing the latest data no matter what, then the timestamp does not become very useful unless you have auditing going on and you use the timestamp in your audit (not sure why you would, to be honest). If you predicate that your database has the most up-to-date copy of the data, then you need the timestamp to ensure that old data that has been modified and re-saved does not overwrite newer data, and implement this logic in your stored procs.
That is my current take on this. Of course, I keep an open mind and am subject to changing it when provided with a convincing argument that conflicts with my views. Also, since I am implementing my first db that uses timestamps right now, my views may change based on that experience...
Comments?
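
The timestamp check described there is optimistic concurrency, and the core of it can be sketched in a few lines. This is a Python simulation with an invented row schema, not SQL Server's actual rowversion mechanism: an update only succeeds if the row's timestamp still matches the one taken at read time, so a stale write cannot silently overwrite newer data.

```python
row = {"id": 1, "name": "Acme", "ts": 1}    # the "database" row

def read(r):
    return dict(r)                           # snapshot includes the timestamp

def try_update(r, snapshot, new_name):
    if r["ts"] != snapshot["ts"]:
        return False                         # someone saved in between: reject
    r["name"] = new_name
    r["ts"] += 1                             # the DB bumps the version on write
    return True

a = read(row)                                # user A reads
b = read(row)                                # user B reads the same version
assert try_update(row, a, "Acme Ltd")        # A's write wins
assert not try_update(row, b, "Stale name")  # B is told to re-read first
print(row["name"])  # Acme Ltd
```

In a stored procedure this comparison becomes a `WHERE ts = @original_ts` clause on the UPDATE, with zero rows affected signalling the "record changed since you read it" case the post describes.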
 
Neal said:
Bottom line is that MS Access is "trying" to be phased out by Microsoft. I honestly don't see that happening unless they figure out a way to put SQL Express under the UI of Access and rename the product. However, I invite your opinions on VistaDB v3 as it moves towards production. Right now 2.1 is their latest, but v3 looks like it may be the time for me to further investigate. I'd personally really like to deal with nothing but SQL Server/SQL Express, but people like Microsoft forget people still have dialup connections and shareware vendors cannot have 30-50 MB downloads for their shareware apps!

Indeed, which is why I'm sure the next version of Windows will ship with SSX drivers, like XP ships with Jet drivers.. I thought Access could already serve as a front end for MSDE/SSX? I'm sure, after running the upsizing wizard, that Access has reconnected to the new database it made (was it an mde file? not sure) and could administer it?
 
dellman said:
You are absolutely correct. I did not consider these issues. While I have been programming for years and years, and am somewhat knowledgeable in solving problems (sometimes in a way that is not the best way). I only recently graduated from technical college with my diploma in software design.
...
If you predicate that your database has the most up-to-date copy of the data, then you need the timestamp to ensure that old data that has been modified and re-saved does not overwrite newer data, and implement this logic in your stored procs.
That is my current take on this. Of course, I keep an open mind and am subject to changing it when provided with a convincing argument that conflicts with my views. Also, since I am implementing my first db that uses timestamps right now, my views may change based on that experience...
Comments?

I'm not going to present any arguments or try to force a change of mind here; it's not really the focus of the debate. I think vis781's original comment was one of curiosity - of how you might have addressed a particular problem that bugs OO designers when using an RDBMS..

Not all OO designers are bothered; some don't even consider it - the database is the persistent store and that is that. The more inquiring minds ask why, simply because the notion of separating object logic (the client-side app) from object data (the server-side db) doesn't sit well with them: the two machines these would respectively be on, and the link between them, are too slow to make for a fully usable program in all circumstances. To be 100% correct, there should be no stale caches of data. But if caches didn't exist, the WWW (for example) wouldn't work well at all, so caches have to exist; they are a bad thing in some circumstances and essential in others.. You can see how one man's meat is another's poison in so many patterns of technology usage and application.

Personally, I take less of an issue with it, but I'm not writing militarily rigorous software or mission-critical apps - I'm just dealing with potentially millions of dollars of other people's money :) To me, I'm reasonably happy with the database as a persistence mechanism, and it will do most of what I want. I suspect you're in a similar situation and have written an app to help out your chosen style of working. If you were to take this app and use it as a revenue generator to sell to others, I think that's when you would discover the problems being discussed; we all know that users will use and break things in ways we never even considered (and all hell breaks loose when the users are other coders). Keep it in mind though - even if a piece of software solves your particular problem, a way to make the job varied, fun and interesting is to imagine how your solution wouldn't apply to someone else.. Use it to learn and explore the more philosophical/theoretical problems facing you in OO programming. This last point is perhaps one of the areas that VB programmers consider least, especially if they are upgrading from more procedural ways of thinking :)
 