How to deal with autoincrement problem...

wakh

Well-known member
Joined
Oct 6, 2008
Messages
61
Programming Experience
3-5
Hi,

The problem here is very simple, but I have not been able to find a solution for it. You are probably aware of how business documents like receipt vouchers, invoices, payment vouchers etc. work: each one has a voucher number to identify it. Now assume we are using the autoincrement feature of SQL Server to increase the voucher number by 1 every time a new record is inserted.

In the VB application, a form is designed to enter all the required data. Once a user opens the form (say, create_voucher_form), the next available voucher number is displayed on it. Now suppose another user opens the same application on another computer and opens the same create_voucher_form. That form will display the same voucher number as the first user's form, because the first user has not saved anything yet.

This is where the issue arises. Say the first user creates a voucher and saves it. The user on the other computer also creates a voucher and saves it under the same number, which overwrites the first user's record, and in doing so the first user's update is lost.

I hope the problem is clearly explained. What would be a possible solution for such a scenario?
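To make the race concrete, here is a small Java sketch (all names hypothetical, with an in-memory map standing in for the voucher table) of why pre-fetching "next number = max + 1" loses updates, and why letting the store assign the number atomically at insert time (the way an IDENTITY column plus SCOPE_IDENTITY() would in SQL Server) avoids it:

```java
// Hypothetical in-memory stand-in for the voucher table, to illustrate
// why pre-fetching the "next" number races, and why assigning the number
// at insert time and returning it to the caller does not.
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

public class VoucherDemo {
    static Map<Integer, String> vouchers = new HashMap<>();

    // Broken pattern: each form pre-computes "next number" before saving.
    static int peekNextNo() { return vouchers.size() + 1; }

    // Safe pattern: the number is assigned atomically at insert time and
    // returned (analogous to INSERT followed by SELECT SCOPE_IDENTITY()).
    static AtomicInteger identity = new AtomicInteger(0);
    static int insert(String data) {
        int no = identity.incrementAndGet();
        vouchers.put(no, data);
        return no;
    }

    public static void main(String[] args) {
        // Both users open the form and see the same "next" number: 1
        int userA = peekNextNo();
        int userB = peekNextNo();
        vouchers.put(userA, "A's voucher");
        vouchers.put(userB, "B's voucher");      // overwrites A's record!
        System.out.println(vouchers.get(1));     // prints "B's voucher": A's update is lost

        vouchers.clear();
        // With insert-time assignment, each save gets a distinct number.
        System.out.println(insert("A's voucher")); // prints 1
        System.out.println(insert("B's voucher")); // prints 2
    }
}
```

The practical upshot: don't display a real voucher number until the row is saved (show "auto" or a provisional label instead), and read the assigned number back after the insert.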

Regards,
wakh
 
No doubt C# is nice too, but the power of C++ is what I love, and the best thing is that no framework is required: standalone executables. C# is a great language with a very bright future. :)
 
Needs Windows (and maybe msvcrtXX.dll), doesn't it?

It's all just layers of abstraction dear boy..

Well, that's just the platform API. C++ can compile standalone apps on a Mac or on Linux, but they use a different API.

.Net is more like Java than anything else, save that .Net uses the terminology of "Framework" where Java uses "Virtual Machine". Tomayto, tomahto.

I will say the Framework is nice, helpful, and quick... except when it doesn't do exactly what I, the programmer, expect or desire, and then I have to spend massive amounts of time working around its limitations. However, I will admit that is no different from any other language I've worked in.
 
Well, that's just platform API.
Which is.. a framework ;)

C++ can compile stand alone apps on a Mac or on Linux but they use a different API.
Yes yes, so can C#.. Of course it needs a Mac or Linux framework..

Are you seeing where i'm going with this?

.Net is more like Java than anything else, save that .Net uses the terminology of "Framework" where Java uses "Virtual Machine". Tomayto, tomahto.
The ultimate goal: to get electrons to move around. Everything above that is layers of abstraction..

and then I have to spend massive amounts of time working around its limitations
You'd necessarily have the same complaints in any language, even assembler. Especially assembler. All you're doing is building an abstraction model that fits the way you think. There are only two options: do as the authors want, or write something that allows you to do as you want. You currently still enjoy the former enough to engage with it, that's all! :)
 
Ah yes, that's a little more of a semantic argument then. What I call a framework is slightly different from an API. That C# runs on a Mac or Linux is surprising, but not unexpected; I don't have many dealings with C#. It's like C/C++ but reduced, and it requires the .Net Framework.

ANSI languages (C, Pascal, BASIC, Fortran, asm) all work within the guidelines of system/OS/app, where the compiler (and subsequent libraries) are coded/designed for the particular API of the system.

The layers of abstraction, or as we used to say the "higher level languages", are only the distance from the processor, and currently the .Net languages are the highest level languages still considered platform development. Java would be the highest level language to date that I know of (not counting scripting languages like Lua), since you need a virtual machine to run it, so it NEVER touches the processor or the hardware directly.

.Net still needs the Windows architecture (unless the .Net Framework has been reworked to exist on platforms other than Windows, which I was not aware of if it has) in addition to the .Net Framework itself.
I see it as simply: Hardware -> OS -> API -> Framework -> Virtual Machine
and currently .net is a layer above the API, where Pascal (currently Delphi) and C/C++ can work directly on the API.

And yes, I have complaints about every language, because it's always at that moment when you say, "I'd like it if my object did this..." that you find out the code that handles it is in a Private method of the parent class, and you're screwed. Personally, I make all my objects 99.8% Overridable/Protected, because I NEVER know when I'll need to change the functionality of something, and I hate being limited even by my own designs that are a year or two old and could be done better now.

What I'd really love is an idea mapper: take one base logic language (whatever the syntax, it doesn't matter) and have it convert down into the other languages, utilizing whatever API/framework or whatnot is necessary. I wouldn't be surprised if other programmers felt the same, because while our job is creation, we are often bogged down by the specifics of whatever language is current. My personal favorite is Pascal, but that's mostly nostalgia, because it was my first real language (other than Apple BASIC on a IIc, but I don't count that *snicker*).

On the up side, the RAD style of creation has made life so much easier, and I quite believe the best invention ever was code completion. Since I learn fastest by doing, code completion provides a vast repertoire of methods/properties that I don't intrinsically know, and suddenly I'm seeing a function called "IWillDoExactlyWhatYouWant" and everything falls into place.

Someday I may jump into C# a little more, but my C knowledge is completely limited to Game Mod Development, so I'm crap at originating, *chuckle*

-The worst thing about programming is waiting for that infinite gap between completion and production to narrow. (*twiddles thumbs*)

:p
 
Needs windows, (and maybe msvcrtXX.dll) doesn't it?

That's just a small runtime file which is required. Microsoft Windows by default ships with runtime files for different versions of VC++. Of course, it's all just layers of abstraction.

About .NET being like Java: it surely is, in fact that's the main aim behind its development. The framework for other operating systems for C# is indeed there, but it's not official yet, even though it's on Microsoft's to-do list and is being worked on. ;)

It's true that the .NET Framework is a higher layer of abstraction than the API. I guess the higher we move in the layers of abstraction, the more limitations will be imposed. :)
 
Java would be the highest level language to date that I know of (not counting scripting languages like LUA) since you need a virtual machine to run it so it NEVER touches the processor or the hardware directly.

When youre finished with that bong, can I have a toke? Smells good!

OK, clear up:

Java needs a virtual machine to run so it never touches a processor? Er.. So what runs the Java code I write if it's not the CPU?

Come on, don't be so dumb. Every bit of software that runs on a computer ends up as electrons rushing around after passing through every layer of abstraction we have created.

Framework, API blah blah.. It's all just terms of reference for what level of abstraction is under discussion. Human labels. I'm asking you to skip past all that and think about how programs come to run.
The pair of you are up and down making out that .NET programs are somehow different to C++ programs, and that one or the other does/doesn't end up as a bunch of op codes flying through the CPU.



I see it as simply: Hardware -> OS -> API -> Framework -> Virtual Machine
and currently .net is a layer above the API, where Pascal (currently Delphi) and C/C++ can work directly on the API.
So what? I'm talking about what happens in the CPU. I'm trying to make the point that no matter how many layers of compilation you stick on top of something, there is no such thing as a high-level language being a "stand-alone" application, because it cannot "stand alone" - it requires the support of the layers of abstraction underneath it. Take those away, and you cannot compile it!

If it is really insisted that a compiled C++ exe is a "stand alone" app where a Java or .NET app is not, then you should take a look at tools that compile Java or .NET apps right the way down to the same level of machine code that a C++ app becomes when compiled. It's conceptually very simple. All code has to be compiled to something the CPU can work with. A .NET or Java compiler makes standardized bytecode that the JVM/framework VM (which you can think of as another compiler) will eventually turn into machine code. The JVM *HAS* to be a compiler; it *HAS* to get the bytecode into the CPU, otherwise it will not run!

Hopefully you can see what I'm getting at here. Suppose you had all the Windows source code, and the BIOS source, and your source, and the C++ "API" sources and every other bit of source in between. Then you could execute one massive compilation that takes every bit of source that your app ever touches in any way, and generates one long stream of bytes suitable for loading into a CPU. That, and only that, is a "stand alone" application.

I'm trying to get you to stop drawing a distinction between what you would call "interpreted languages" and what you would call "compiled languages" - they are all compiled languages, and they all compile to some halfway house that relies on something else to run. Your Windows C++ app is not some magical panacea of shining executable brilliance; it too relies on something else being there to run. To this end, it is no different to any other language.

I hate being limited by even my own designs that are a year or two old and can be done better now.
Overriding it may not be the answer. There are good reasons why Microsoft close off big sections of their APIs/frameworks to overriding; it was not intended for you to supplant the functionality because there are things you don't know about the rest of it. If you have the source of X and you decide that something can be done better, you would usually rewrite the faulty sections, not override them..


I'd really love is an Idea mapper
Mmm, but don't you see that you're just zipping to the highest level of abstraction you're currently capable of imagining (your own thoughts) and then having a series of translations convert your mind wanderings into code that a CPU can run? There is no need for a "brain compiler" that takes your thoughts and churns out VB.NET (your brain is already doing that through your fingers, no?): 1) because there's no point stopping there - you might as well carry on to compile it for the CPU - and 2) it's a complex task that currently only a brain is sophisticated enough to do.

suddenly i'm seeing a function called "IWillDoExactlyWhatYouWant" and everything falls into place.

Consider, though, that the myriad of situations where you want something that does exactly what you want would lead to an infinitely large library of function calls:
ReverseAStringAndTakeTheLast3Letters(string in)
ReverseAStringAndTakeTheLast4Letters(string in)
ReverseAStringAndTakeTheLast5LettersAndAnyNumbers(string in)

You can't do it! This is why some basic functionality has to be provided that you can build on, because no one man/company can know everything you want to build; otherwise they'd build it and there would be nothing for you to do. Further, with the increasing size of the provided functionality, you'd get lost. The .NET framework is already too large to memorise, hence IntelliSense help; you still need Google to help you look for what method call you need to use to achieve your goal, etc. Increasing the provisions in the framework for "everything", so that you wouldn't have to do "anything", would necessarily make it 100% useless to everyone.
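To put a point on that: each "exact-fit" call listed above is just a composition of two primitives a framework already provides, parameterised by the part that varies. A hypothetical Java sketch (the class and method names are made up for illustration):

```java
// Hypothetical sketch: the family of "exact-fit" functions above collapses
// into one composition of two primitives (reverse + substring), with the
// varying part (how many letters) as a parameter.
public class Compose {
    static String reverseAndTakeLast(String s, int n) {
        String rev = new StringBuilder(s).reverse().toString();
        // Take the last n characters of the reversed string
        // (or the whole thing, if it is shorter than n).
        return rev.substring(Math.max(0, rev.length() - n));
    }
}
```

So the framework's job is to supply `reverse` and `substring`, not `ReverseAStringAndTakeTheLast3Letters`.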


Someday I may jump into C# a little more
You'll be disappointed; it's exactly like VB.NET - same framework, different look, cleaner syntax. Like all .NET flavours (including community-provided ones) it represents a marketing idea; MS want to kick Sun in the ass for losing that court case over Java. They do this by stealing programmers. Java is one syntax that runs on many platforms, hence appealing to a lot of people. .NET is many syntaxes that run on one* platform, hence appealing to many people.
Sun focus on getting the JVM onto multiple platforms, and the community attempts to provide other syntaxes that kick out Java bytecodes (Jython, Groovy). MS focus on their 4 core syntaxes, and the community makes other syntaxes (COBOL.NET, Ruby.NET) and frameworks for other platforms (Mono).

It's a bit of marketing brilliance, to take all that work they poured into their Java implementation and salvage it in the face of being sued successfully by Sun, regenerating it as a serious competitor and stealing Sun's market share of programmers. As for which has a wider appeal, I don't know. I'd say .NET, because of the community's efforts at getting it running on other platforms as well as other syntaxes.


That's just a small runtime file which is required.
So is the .NET framework.. You need to think in more abstract concepts.

Microsoft Windows by default ships with runtime files of different versions of VC++.
And Vista doesn't ship with a copy of the .NET framework installed? Come on, think outside the box!

About .NET being like Java: it surely is, in fact that's the main aim behind its development. The framework for other operating systems for C# is indeed there, but it's not official yet, even though it's on Microsoft's to-do list and is being worked on. ;)
Uhuh, and I'm sure they'll get around to buying the Mono guys when they feel it's economically viable from a marketing POV. I understand your points, really. I used to be a Java programmer; I look at the similarity between Java and .NET with great cynicism because I know the commercial history of both languages.

It's true that the .NET Framework is a higher layer of abstraction than the API. I guess the higher we move in the layers of abstraction, the more limitations will be imposed. :)
That is not the goal of abstraction. You do not seem to value the balance between spending weeks writing trivial code to render a line on screen, and spending weeks in a high-level language actually getting something done. In one line of my .NET code I can connect to and retrieve data from a database. What can you do in 1 line of assembler? Complaining that higher abstraction imposes more limitations is pure crap; it frees you up to write some code that has worth, not write 5000 lines of code to fill a rectangle with a solid colour.
 
I guess not all flavours of C++ require runtime files. Those that ship with Windows by default are for VC++. True, Vista ships with the .NET framework installed.

It's true that the .NET Framework is a higher layer of abstraction than the API. I guess the higher we move in the layers of abstraction, the more limitations will be imposed.

Of course the Framework is designed to ease the life of developers, and it has a lot of benefits. What I meant there is, for instance, low-level capabilities: for example, device drivers cannot be written in .NET. :)
 
Java needs a virtual machine to run so it never touches a processor? Er.. So what runs the Java code I write if it's not the CPU?

Come on, don't be so dumb. Every bit of software that runs on a computer ends up as electrons rushing around after passing through every layer of abstraction we have created.
Java itself is never run directly in the OS of a system. That's why there are very few (and, when present, very "abstracted") functions that deal directly with the hardware: because the OS deals with the hardware. For most languages, the programmer deals with the OS, which deals with the hardware, but Java doesn't even do that. In Java you write to the Java "OS", which is an added layer of abstraction on top of the actual OS. I simplified that by saying "Java doesn't touch the CPU"; way to take an "abstract" point and miss the metaphor.
Framework, API blah blah.. It's all just terms of reference for what level of abstraction is under discussion. Human labels. I'm asking you to skip past all that and think about how programs come to run.
As opposed to computer labels? We (humans) not only invented the computers but also the language to describe them. When in doubt, be more meticulously (anal) semantic about terms, or otherwise we never know what anyone is talking about.
The pair of you are up and down making out that .NET programs are somehow different to C++ programs and one or the other does/doesnt end up as a bunch of op codes flying through the CPU.
They are different... and they aren't. I was only describing the layers of abstraction as being more for .Net than for C++. There are layers of abstraction in all things, existent or metaphorical, technological or mechanical, but how many layers of abstraction are there? For .Net I count 4-5; for C++ I count 3-4. That's more layers. That is all we are pointing out.

So what? I'm talking about what happens in the CPU.. I'm trying to make the point that no mattter how many layers of compilation you stick on top of something, there is no such thing as a high-level language being a "stand-alone" application, because it cannot "stand alone" - It requires the support of the layers of abstraction underneath it. Take those away, and you cannot compile it!
Ah, yes, everything is dependent on other things, yes. But in the software field the term "stand alone" means that it only needs the operating system to run. The .Net environment cannot run on just Windows; it needs Windows + the .Net Framework. That is the definition of a "stand alone" application. When talking about layers of abstraction, we still must come to a semblance of agreement by referring to a common basis of definition.

I'm trying to get you to stop drawing a distinction between what you would call "interpreted languages" and what you would call "compiled languages" - they are all compiled languages
Uh, no, that will never happen, because it isn't true. .Net is a compiled language, true, as are many others. However, there are quite a few languages that are not compiled at all: HTML, ASP, JavaScript, VBScript, Perl, PHP, XML. Those are languages, many of which do execute a function and are not just visual layouts, but are never compiled.
Your Windows C++ app is not some magical panacea of shining executable brilliance; it too relies on something else being there to run. To this end, it is no different to any other language.
We never said it was, or that it didn't require anything else. Yes, if you want to go to these levels of vagueness and abstraction, then it might be best if you first said: "You need a computer to run a program, and all programs need a computer to run them."

it was not intended for you to supplant the functionality because there are things you don't know about the rest of it. If you have the source of X and you decide that something can be done better, you would usually rewrite the faulty sections, not override them..
Actually, it depends on the section. A perfect example is property accessors, like the generic collection's default Item() property. Right there, the default Item property returns the item for an index. Unlike the exception-trapping-happy folk of today, I disagree with trapping exceptions externally (personal preference). So I pretty much always override the Item() property to first check whether the index is in range and, if not, return Nothing or the default value for the collection type. But wait... I can't override the Item() property; I have to "shadow" it. Those are the situations I'm talking about. In understanding layers of abstraction, the nature of shadowing versus overriding a method is much different in memory, and shadowing is not preferable. And if VB treated them the same when compiled, why the need for two keywords?
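For comparison, here is the same idea sketched in Java, where List.get() is virtual and can be overridden rather than shadowed. The class name and lenient-null behaviour are illustrative, not anything from the post:

```java
import java.util.ArrayList;

// Hypothetical illustration of the pattern described above: a collection
// whose index accessor returns a default (null) instead of throwing when
// the index is out of range. In Java, get() can simply be overridden.
public class LenientList<T> extends ArrayList<T> {
    @Override
    public T get(int index) {
        if (index < 0 || index >= size()) {
            return null;  // out of range: return a default, no exception
        }
        return super.get(index);
    }
}
```

Because the override participates in virtual dispatch, callers holding a plain List reference still get the lenient behaviour, which is exactly what shadowing does not guarantee.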

Mmm, but don't you see that youre just zipping to the highest level of abstraction youre currently capable of imagining (your own thoughts) and then having a series of translations convert your mind wanderings into code that a CPU can run.
Actually, I've already gone beyond the layers of abstraction you are referencing and found it utterly pointless to discuss them. Like dealing with the existential nature of life itself, there comes a point when there is nothing left to postulate. Therefore it is more interesting to come back in a little bit and theorize, discuss, and otherwise learn from other people's interpretations of the same concepts. (I enjoy the disagreements, really; it's the only way to be exposed to different points of view. :))

So is the .NET framework.. You need to think in more abstract concepts.
Why? You know, every single time I have ever tried to take a programming course in "college", they have tried to teach me programming logic all over again. I have taken Programming Logic in the Language of X 12 times, once for each language I've tried to take a course in. I'm already as abstract as you can get.

Every language has statements, conditions, and loops. Those are the abstract fundamentals of everything. Conditions can be broken into a simple condition-statement form, or an "either/or" condition-statement-or-statement: the classic If-Else, or If-ElseIf situation. Some expand the condition into a multi-condition, as evident in the Select, Switch, or Case statement, depending on which language you are dealing with. All languages can only have 3 kinds of loops; there are only 3 no matter how you slice it: pre-test, post-test, and static (counted). That's it. Everything else flows from that most abstract layer.

You do realize there was a time when there were no such things as objects? When there weren't any "procedures", "functions", or "methods", and to write code I (and others like me) had to literally jump about the code to different line numbers for EVERYTHING. Those too are just other layers of abstraction that basically make every language identical. There is no difference to me between Pascal, .Net, C++, Lua, HTML, Java, or programming for a TI-81 calculator. They are all the same.

Once past that layer of abstraction, it is more enjoyable to discuss the finer nuances of each language's syntax, dependencies, and performance. Some tasks I would say C++ compiles better for, where for others it's pretty cumbersome to use. In other situations I would choose VB over my preferred Pascal because of simplicity, or speed, or just because the IDE is free. :p
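The three loop shapes mentioned above can be sketched in Java syntax (the method names are made up; the only point is where the condition is tested):

```java
// Pre-test, post-test, and counted loops.
// Each method returns how many times its body ran.
public class Loops {
    static int preTestPasses(int n) {     // while: condition tested before the body
        int passes = 0, i = 0;
        while (i < n) { passes++; i++; }
        return passes;
    }
    static int postTestPasses(int n) {    // do-while: body always runs at least once
        int passes = 0, i = 0;
        do { passes++; i++; } while (i < n);
        return passes;
    }
    static int countedPasses(int n) {     // for: static/counted iteration
        int passes = 0;
        for (int i = 0; i < n; i++) passes++;
        return passes;
    }
}
```

With n = 0 the pre-test loop never runs its body while the post-test loop runs it once; that is the whole practical difference between the first two shapes.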

That is not the goal of abstraction. You do not seem to value the balance between spending weeks writing trivial code to render a line on screen, and spending weeks writing in a high level language actually getting something done.
Actually, I value it greatly, because that's my paycheck, and subsequently my rent and food and everything I need in order to live. That being said, I love both. I used to write machine code for the 80486 Intel instruction set: one routine, 11 bytes long and using put-pixel, scrolled top to bottom, then left to right, drawing vertical lines of cycled palette colors on a 640x480x256 video screen (and this was with VESA bank switching, back before protected mode), and all you'd see as the lines of color shifted from right to left across the screen was a little flicker in the bottom right hand corner. I also went through an ANSI Pascal course, where, yes, there are no strings: no X = "hello world"; you have to manipulate EVERYTHING with characters yourself. I had to write StrComp, StrCat, StrPos, StrIns, and StrDel all myself on arrays of characters. Having written things like that gave me a vast appreciation of how these layers of abstraction were created and instilled over the years.
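As a flavour of what that exercise forces you to write, here is a hypothetical char-array version of two such routines, sketched in Java (Java has String, of course; the point is doing it by hand over raw character arrays):

```java
// Hand-rolled string routines over raw char arrays, in the spirit of the
// StrCat/StrPos exercises described above. Illustrative sketch only.
public class StrOps {
    // Concatenate two char arrays into a new one.
    static char[] strCat(char[] a, char[] b) {
        char[] out = new char[a.length + b.length];
        for (int i = 0; i < a.length; i++) out[i] = a[i];
        for (int i = 0; i < b.length; i++) out[a.length + i] = b[i];
        return out;
    }

    // First index of needle in haystack, or -1 if absent.
    static int strPos(char[] haystack, char[] needle) {
        for (int i = 0; i + needle.length <= haystack.length; i++) {
            boolean hit = true;
            for (int j = 0; j < needle.length; j++) {
                if (haystack[i + j] != needle[j]) { hit = false; break; }
            }
            if (hit) return i;
        }
        return -1;
    }
}
```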

Complaining that higher abstraction imposes more limitations is pure crap; it frees you up to write some code that has worth, not write 5000 lines of code to fill a rectangle with a solid colour.
I did not say higher abstraction's limitations are pure crap; I was basically being a critic (I believe we all know what those are), in that some of the manners in which they imposed those layers of abstraction are more cumbersome, because I have to think the way they want me to. And frankly, did anyone tell Michelangelo how to paint the ceiling? Programming is far more art than science. IT, hardware, networking: that's a bit more science, because it is finite; sometimes hardware breaks. But software is infinitely malleable within the "framework" of the system, which means the three of us could be given a problem, and I have no doubt we would all solve it, but how we solve it is our art.

In other circumstances, I can say I have already written these layers of abstraction myself; for example, the HashTable in VB. In Delphi 4.0 there was no hash table class. There was a list, but it was slow and limited, so I had to write my own. I made mine COM-accessible and thread-safe, and the platform-dependent version utilized virtual memory to allocate the entire hash table, so from one machine to the next the only thing I needed to update in the table was the base pointer; all other elements were relative to it. I've worked in the trenches of memory management because I had to, and then here I am years later working in VB and I see they stole my idea... (*cheezygrin*)... no, they didn't; they just realized that layer of abstraction was necessary. And then when I come to use it, it has none of the really cool functionality I built into my hash table, and I couldn't descend from the class to create my own because of the limitations they imposed. So I can either use theirs with the imposed limitations, or write from scratch. You can imagine that I don't have time to rewrite that entire library of collections and hash tables from scratch in a new language, so I just use theirs, but that doesn't make it any less frustrating at times.

I think in the end we are all debating personal preferences and more or less missing the lexicon and perspective of everyone else. But I will say, I still love writing the low-level, gritty code, because those are some fun problems to solve. I do quite prefer hacking VB as much as I can, though, because its speed and reliability are preferable for the quick in-house business apps I make daily, and when I need an expanded object to do things for me, well, it's written so that I can use it again and save time on the next project. You can't do that in asm, very true, which is why I play with asm for fun, not for work. :)
 
Ah, yes everything is dependent on other things yes. But in the software field the term "Stand Alone" means that it only needs the operating system to run. the .Net environment cannot run on just Windows. It needs Windows + .Net Framework. That is the definition of a "Stand Alone" application.
It depends on how it depends on other things. Most natively compiled C++ applications install with lots of library files etc.; this is no different from .Net. The only difference here is that the .Net libraries are shared, so if a previous app installed them then this app wouldn't need to (and doesn't). Also, .Net apps are compiled to native code by the JIT when run, so while executing they don't differ from the natively compiled C++ app. But the real difference is that .Net apps depend on the .Net runtime being present and ready to JIT, ready to manage memory for all running app domains, etc. Likewise, classic VB had a runtime dependency (but no common class library), and Java has its JRE/JVM runtime (and class libraries). A native C++ app doesn't have this runtime dependency, apart from the OS naturally, and is therefore "standalone". The .Net runtime is part of the Framework, but is separate from the class libraries a .Net app also requires.
 
Java itself is never run directly in the OS of a system.
You may need to update your knowledge of how Java programs work. I'm asking you to grasp the notion that Java is part-compiled at design time, and then "fully compiled" (by your definition of compilation) at runtime, at which point your Java program is using features of the OS the JVM was created for. What I'm trying to get you to understand is my definition of compilation: every single layer of abstraction a program goes through in its translation from a sequence of characters meaningful to a human, to a sequence of bytes meaningful to the CPU, is compilation.

Let's look at this another way. I'll give you a bunch of CPU opcodes, a block of bytes that the CPU could execute/is executing, and I want you to tell me whether it's a Java program or a C++ program. It cannot be done, because it's been repeatedly compiled all the way down to the point where it finishes up just a bunch of opcodes.

Now suppose those codes were to render a 200x200 pixel square, the top left of a window, on screen. You could look at the rendering and say "well... it looks like a Swing GUI, so it's probably a Java program." But you couldn't be sure!

Uh, no, that will never happen, because it isn't true. .Net is a compiled language, true, as are many others. However, there are quite a few languages that are not compiled at all: HTML, ASP, JavaScript, VBScript, Perl, PHP, XML. Those are languages, many of which do execute a function and are not just visual layouts, but are never compiled.
Rubbish! Scripting languages, interpretation of HTML tags causing something to appear on screen that otherwise would not (in that form): they are all ways of directing the computer to execute something or not. Given that you cannot possibly be asserting that the CPU works on HTML directly, I cannot see how you are not getting on with the notion that everything you put into a computer that causes some sort of change in what you see, hear, or interact with is ultimately a result of the CPU processing some opcodes, and those opcodes came to be there as a result of the human input.

For HTML, IE is the start of the compilation process. IE is a compiler. It takes human understandable text and is the start of the process whereby eventually a sequence of opcodes will be whizzing through the CPU/electrons banging around. Note that this descriptor can just as easily be applied to a C++ compiler.

Please, step past the "a compiler is a program that takes a bunch of text source code written by a person and turns it into an exe that the operating system runs" definition; you say later in your post that you're capable of transcending such specifics and thinking in the abstract. I'm proposing the whole leap from human thought to electron movement, and not drawing a distinction between .NET, Java, ASP, VBScript, C++... because there really isn't one for the purposes of this discussion.
 
Let's look at this another way. I'll give you a bunch of CPU opcodes, a block of bytes that the CPU could execute/is executing, and I want you to tell me whether it's a Java program or a C++ program. It cannot be done, because it's been repeatedly compiled all the way down to the point where it finishes up just a bunch of opcodes.
I quite agree, and never did disagree with that concept. Nor did I need it explained. Like I tried to point out before, it is a semantic argument, primarily in that I was not, nor do I believe wakh was ever, discussing the fundamentals of how programming works. Why? Because we already know how it works, and there is no need to re-hash what is explained in Programming 101. More important are the differences, nuances, and quirks of every given language; that's what is really interesting. All programming languages get reduced to bytecode; well, frankly, who cares? How the language is written, how it is distributed, and how specific compiler optimizations affect the end result: now that is a discussion. It's like me saying that a Ford can sometimes drive faster than a Chevy, and you telling me how the internal combustion engine works.

cjard said:
Rubbish! Scripting languages, interpretation of HTML tags causing something to appear on screen that it otherwise would not (in that form) - they are all ways of directing the computer to yes or no execute something. Given that you cannot possibly be asserting that the CPU works on HTML directly, I cannot see how you are not getting on with the notion that everything you put into a computer that causes some sort of change in what you see, hear or interact with, ultimately is a result of the CPU processing some opcodes, and those opcodes came to be there as a result of the human input.
I wasn't. You were. Your original quote was:
cjard said:
I'm trying to get you to stop drawing a distinction between what you would call "interpreted languages" and what you would call "compiled languages" - they are all compiled languages
This is why I keep pointing out semantics, because I emphatically disagree with that statement. They are not all compiled languages. Semantics are a fun and intriguing thing, and hey, I quite enjoy the semantic disagreements from time to time, but if you put ten people in a room and they all define the phrase "Serial Port" to mean something completely different, they may not all be wrong, but they'll never communicate.
To continue though:
cjard said:
For HTML, IE is the start of the compilation process. IE is a compiler. It takes human understandable text and is the start of the process whereby eventually a sequence of opcodes will be whizzing through the CPU/electrons banging around. Note that this descriptor can just as easily be applied to a C++ compiler.
For this I would have to completely disagree, but that is again a "semantic" disagreement. I would define a compiler as an engine that processes a programming language into readily identifiable machine codes and addresses. Having written quite a few "parsers" in my day, I can say IE does not "COMPILE" HTML, it "INTERPRETS" it. The difference, you may wonder? Compilers produce static, final, linkable files (because, you may have forgotten, turning something from C++ to EXE is a two-stage process, not a one-stage compile). HTML is not output to a static form; it is all handled at run-time, which means it isn't compiled, it is interpreted. Now, if we drop our personal preferences toward one meaning or another, this is where I draw the understanding from: look the words up. So I did, just to be sure we understand the difference (semantically) between a Compiled language and an Interpreted Language.
http://en.wikipedia.org/wiki/Compiled_language said:
A compiled language is a programming language whose implementations are typically compilers (translators which generate machine code from source code), and not interpreters (step-by-step executors of source code, where no translation takes place).
I would say the key word in that definition is "translation", because HTML is not "translated" into opcodes; it is interpreted by another program that devises, in its own language, the opcodes. As your postulation earlier put it: a 200x200 image on the screen, and it looks like Swing. Okay, so I write an XML file that contains coordinate information and string text information encapsulated in proper XML tags and elements. And then I write a program in VB that takes that XML file and draws the form on the screen using the information in the XML file. By your description you would call that XML file a Compiled language. It is not. The XML is not translated into machine code. My VB code does all the drawing, all the processing; the XML file is only a DATA file that is Interpreted.
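To make that scenario concrete, here is a minimal sketch of it (in Python rather than VB, purely for brevity; the `<form>`/`<label>` tag vocabulary is invented for illustration). The XML is never translated into machine code; all the drawing logic lives in the host program, and the XML is only data that gets walked:

```python
import xml.etree.ElementTree as ET

# Hypothetical form description: pure data, not a program.
FORM_XML = """
<form>
  <label x="10" y="20" text="Name:"/>
  <label x="10" y="44" text="Address:"/>
</form>
"""

def render(xml_source):
    """Interpret the XML form description.

    Every decision about *how* to draw is made here, in the host
    program; the XML contributes only coordinates and strings.
    """
    drawn = []
    for elem in ET.fromstring(xml_source):
        drawn.append("draw %s %r at (%s, %s)"
                     % (elem.tag, elem.get("text"), elem.get("x"), elem.get("y")))
    return drawn

for line in render(FORM_XML):
    print(line)
```

Swap in a different XML file and the same `render` function produces a different "form" without anything being recompiled, which is the interpreted-data behavior being argued for here.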
http://en.wikipedia.org/wiki/Interpreted_language said:
In computer programming an interpreted language is a programming language whose implementation often takes the form of an interpreter. Theoretically, any language may be compiled or interpreted, so this designation is applied purely because of common implementation practice and not some underlying property of a language.
While Java is translated to a form that is intended to be interpreted, just-in-time compilation is often used to generate machine code.
Again this term of translation; however, interestingly enough, we find that Java here is in an in-between state between interpreted and compiled, because it is translated, but instead of being translated completely, it is reduced partially, awaiting the final stage of execution. Like par-baked bread, it is only brought part of the way. Yet, based on the stipulation of translation, it must be conceded that Java is a compiled language - but compiled specifically for interpretation by a virtual machine.
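That "par-baked" model (translate ahead of time to an intermediate form, then execute that form later with a separate loop) can be sketched as a toy two-stage pipeline. Everything below is invented for illustration - a tiny postfix-expression "bytecode" and a matching "virtual machine" - and is only loosely analogous to javac producing .class files that the JVM runs:

```python
# Stage 1: "compile" postfix tokens into a list of (opcode, arg) pairs.
def compile_expr(tokens):
    """Translate tokens like ['2', '3', '+'] into toy bytecode."""
    code = []
    for t in tokens:
        if t.isdigit():
            code.append(("PUSH", int(t)))
        else:
            code.append(("ADD" if t == "+" else "MUL", None))
    return code

# Stage 2: the "virtual machine" interprets the bytecode later,
# with no reference back to the original source tokens.
def run(code):
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

bytecode = compile_expr(["2", "3", "+", "4", "*"])  # (2 + 3) * 4
print(run(bytecode))  # 20
```

The bytecode list could just as well be saved to disk after stage 1 and executed much later by stage 2, which is the sense in which Java is "compiled, but compiled for interpretation."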
The American Heritage® Science Dictionary Copyright © 2002 by Houghton Mifflin Company. said:
A compiled language is a language in which the set of instructions (or code) written by the programmer is converted into machine language by special software called a compiler prior to being executed. C++ and SmallTalk are examples of compiled languages. ◇ An interpreted language is a language in which the set of instructions (or code) written by the programmer is converted into machine language by special software called a compiler prior to being executed. Most scripting and macro languages are interpreted languages.
And then here the semantics are pretty much the same, depicting the HTML and the C++ to be practically identical, which is again the fun of definition, semantics, denotation and connotation. Which is right/which is wrong? Neither and Both.
cjard said:
Please, step past the "a compiler is a program that takes a bunch of text source code written by a person and turns it into an exe that the operating system runs"
I never had that definition of compiler, and it is, indeed, a fallacy, since a compiler translates source code into readily understandable machine code for a specific processor set. That code cannot run or do anything. It just is. After that, one needs to run a Linker which attaches all underlying byte code of all the compiled files into a processable executable that can be run by the kernel.
The Free On-line Dictionary of Computing said:
A program that combines one or more files containing object code from separately compiled program modules into a single file containing loadable or executable code
This process involves resolving references between the modules and fixing the relocation information used by the operating system kernel when loading the file into memory to run it.

cjard said:
you say later in your post that you're capable of transcending such specifics and thinking in the abstract.. I'm proposing that whole leap from human thought to electron movement, and not drawing a distinction between .NET, Java, ASP, vbscript, C++.. because there really isn't one for the purposes of this discussion.
And there it is, the finality of the discussion, in that after all this you have admitted what you felt the discussion was about, as well as your semantic understanding of what is truly abstract and not. I was not discussing that, nor did I ever try to lead anyone to believe I was. I was drawing a distinction between languages because there are distinctions between them, just as there are between German, French, Oxford English, and American English.
Abstract is conceptualization of the mind; it is non-concrete and primarily theoretical - postulation, conjecture. Though it was my impression that originally we were discussing the pitfalls of one language over another, it has suddenly become a discussion on the existential nature of the Central Processing Unit. Frankly, I don't want to think about the philosophy, or transcendentalism, of a silicon diode, but that's me; and additionally, the nature of that discussion would also not be abstract. Your arguments were re-introducing fundamentals, which are cold, hard, and very finite. A computer does as a computer does, and if we wanted, we could go further into how RISC versus CISC processors handle code and how compilers work for them. But to what end?
I don't disagree with the statement: all programming languages are the same. Because I have argued that point the majority of my life. Nor do I disagree with the statement: the computer (CPU) does not know the difference between C++ and VB.Net. As that is elementary.
But, once we forget about the computer, we realize that we are human, and flawed. Flawed humans designed the microchip, and flawed humans created the Intel processing instruction set. Flawed humans created the C++ and VB languages, and flawed humans created the IDEs and compilers, right down to the flawed humans in you and me who use these tools and inventions daily.
That, to me, is where the interesting discussions take place, not in the core of the CPU, which is what I felt was the nature of this discussion when it began, after the original question had been answered on this thread. And though the layers of abstraction between framework, API, and virtual machine began to enter it, they are fundamentally static, with no room to maneuver, which is, forgive my ADD, very boring *shrug*, as there is no point in discussing facts.
In the end though, my friends, it has been quite educational, which is the purpose of any discussion. I can say I learned quite a few new things in this topic here, and hope there will be many others in the future. We may not always agree on the semantics, or on what is interesting, but we can always walk away with something positive. And just for minor amusement, perhaps in the future we will relegate such discussions to the "discussion" forum as opposed to the question forum, eh? At least then we can go as far off the deep end as possible.

Cheers!
 
I would say the key word in that definition is "translation", because HTML is not "translated" into opcodes; it is interpreted by another program that devises, in its own language, the opcodes. As your postulation earlier put it: a 200x200 image on the screen, and it looks like Swing. Okay, so I write an XML file that contains coordinate information and string text information encapsulated in proper XML tags and elements. And then I write a program in VB that takes that XML file and draws the form on the screen using the information in the XML file. By your description you would call that XML file a Compiled language. It is not. The XML is not translated into machine code. My VB code does all the drawing, all the processing; the XML file is only a DATA file that is Interpreted.

Where I'm coming from in this debate is that it *is* translated into opcodes. Your VB program is a compiler of the XML. Without the XML it does nothing, yet with it, using a source code created by a human, it renders an image on screen. Necessarily at some point, your program has been party to a series of compilations that cause a sequence of opcodes to arrive at the CPU that cause a 200x200 pixel square (of a Swing GUI, though it never touched Java) to be rendered on screen.

Given that everything a computer does is, at its most elementary (and using bytes as a suitable common denominator, to get away from 1s, 0s and electrons), a stream of bytes, I can thus say that you have written a program that arranges for a stream of bytes to be passed through the CPU, just like a Java program does, and a Swing GUI is rendered on screen.

Anything that directs a computer to do anything other than nothing is applicable to this definition I'm drawing. So far I've kept it within the bounds of what we typically think of as programming, but let's forget the programming aspect and just write a README; even a text file, by the sheer fact that it contains 100 letter As, and thus causes the computer to paint 100 letter As on screen in Notepad's edit window, is a form of compiled program: given the task "make a computer CPU render 100 glyphs on screen that look like the letter A", one way to solve it is to write your "program" and load it into something that will compile it and ensure that the CPU does its job.

While I think you and I understand the basis of this, I sincerely doubt that 95% or more of people understand this, or even care to think about it..

When you say another program interprets the XML file and arranges its own set of opcodes.. er.. yes! That's compilation! XML is thus compiled into a set of opcodes. What you're actually saying the difference is, is where they are stored. Probably your program will load the XML into RAM, go through some process of compilation, and the result is in RAM; the CPU runs over it and the GUI is drawn. The only difference between this and your traditionally compiled C++ source -> exe is where the result is eventually stored. Again, these are classically defined boundaries; there really is no conceptual reason why a CPU couldn't process a stream of bytes off a hard disk, or RAM couldn't be powered on permanently with the CPU spending its entire time running over the instruction list it finds there repeatedly..
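For what it's worth, the "the only difference is where the result is stored" point is easy to demonstrate with Python's own compile step: the same source can be translated to an in-memory code object and run immediately, or the translated result can be serialized to bytes and stored first (which is essentially all a .pyc file is). A small sketch:

```python
import marshal

source = "result = sum(range(10))"

# Stage 1: translate the source text into a code object (in RAM).
code_obj = compile(source, "<string>", "exec")

# The code object is real translated output: it can be serialized
# to bytes and written to disk, just like a traditional compiler's
# output file...
frozen = marshal.dumps(code_obj)

# ...and later loaded back and executed, with no source text involved.
ns = {}
exec(marshal.loads(frozen), ns)
print(ns["result"])  # 45
```

Whether `frozen` lives in RAM for a millisecond or on disk for a year, the translation step is the same; only the storage location of the result differs.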
 
While I think you and I understand the basis of this, I sincerely doubt that 95% or more of people understand this, or even care to think about it..
Very true...sad, but true.:eek:

When you say another program interprets the XML file and arranges its own set of opcodes.. er.. yes! That's compilation! XML is thus compiled into a set of opcodes. What you're actually saying the difference is, is where they are stored. Probably your program will load the XML into RAM, go through some process of compilation, and the result is in RAM; the CPU runs over it and the GUI is drawn. The only difference between this and your traditionally compiled C++ source -> exe is where the result is eventually stored. Again, these are classically defined boundaries; there really is no conceptual reason why a CPU couldn't process a stream of bytes off a hard disk, or RAM couldn't be powered on permanently with the CPU spending its entire time running over the instruction list it finds there repeatedly..
Quite true, but I would only say that I am not declaring the difference, I'm going off definition. Any HTML book, topic, or discussion, in contrast to any C++ book, topic, or discussion, will define a base delineation between compiled and interpreted languages. I did not make up these definitions; I just use them to communicate my thoughts and ideas. Naturally, the XML file can't do anything without the VB program, and vice versa. Equally, the .c file can't do anything without an editor to allow the programmer to edit the data, nor can it do anything without a compiler to translate it into CPU-readable opcodes. The delineation is most definitely how the data is stored. Thus the VB program that utilizes the XML file is defined as an Interpreter processing an Interpreted Language, whereas the VS IDE is defined as a Compiler processing a Compiled Language. Sure, the metaphors and similarities are apparent, but the one ultimate difference is that if I gave you the XML source file, you could NOT run it without the VB executable that interprets it - e.g. Java does not work without the JVM. On the other hand, were I to give you the compiled C++ executable, you COULD run it without need of anything else (barring the OS it was designed for, naturally).
And that is the core of the differences in layers of abstraction. The Interpreted Language always requires an additional layer of abstraction beyond the Compiled Language. Compiled Languages work as designed in the framework/API they were compiled for. The Interpreted Language does not work on any framework/API directly, as it is designed to be Interpreted at run-time by that additional layer of abstraction, which was itself designed to work on an existing framework/API. Thus, I can write 1 Java program, and every JVM will run it, because I'm writing for the JVM framework, which is an additional layer on top of whatever platform happens to be running the JVM. Though it is not a perfect analogy, since Java still is a compiled language - compiled for the JVM's translation to specific platforms - whereas, on the other hand, HTML is 100% interpreted by applications on every platform. One cannot see what the HTML is designed to do unless they run the HTML source through an Interpreter. That is the fundamental core difference between our definitions. True, what I see on screen could have come from anywhere and doesn't really matter, as I can take HTML source and run it through a program that hard-codes those values into a new EXE, and thus I've in effect "compiled" the HTML. As a programmer, however, I believe it does matter to recognize those differences, since we have to make those delineations all the time on which method would be most efficient and profitable (not monetarily) to the purpose of the application's design. Some things it is best not to code into a compiled form but to interpret on the fly - hence the user.config XML settings file, which every VB application in effect "compiles" into changes to the opcode stream that it applies at run-time.
I find most discussions/debates always center around a common basis of definition, and yes, one could call the VB Interpreter of the XML a Compiler, in its effective purpose, but there is already a definition for its use and purpose, so why redefine it? A bow and arrow are used to hunt, and so are a rifle and a bullet, but we don't say the bow is a rifle, or the bullet is an arrow. We use the words that are defined for the purpose. (Yes, I am very semantically anal retentive; comes from having an author in the family, *chuckle*.) With respect, understanding now the conceptual aim presented in this discussion regarding the nature of programming: quite frankly there is no difference, on any level, in how programming works with regards to the computer, and in the end all things are reduced to 1's and 0's. Having programmed directly in those 1's and 0's (though technically it was inline hex code), I quite prefer the higher-level languages that allow me to not have to make those conversions anymore. I know that every object and method is eventually converted to a byte stream, and every webpage and picture I download is a byte stream. Even a JPG is, under the conceptual interpretation, "compiled" into 1's and 0's for the screen to draw. Being in a language forum, though, this seems a bit more suited to a theoretical programming logic topic than a specific language one. *shrug*
 