Interpreted vs Compiled

There’s a bit of disagreement over what counts as an “interpreted language” and what counts as a “compiled language”. Technically, you would think that any language that can be interpreted by a virtual machine can also be compiled, even if with great difficulty. I can’t speak for every language out there, and I haven’t gone over all the considerations needed to come to any sort of definitive conclusion on the matter, but I am inclined to think it is possible to invent a programming language that cannot be run as native code.

That’s not to say it couldn’t be “compiled”, but when most programmers think of compiled languages, don’t they think of a program that runs without a virtual machine? If you must build a virtual machine – even one written in assembly – then the language is, technically speaking, “interpreted”. All computer activity is interpreted by humans, but let’s back up a minute and remember where these terms (“interpreted language” and “compiled language”) came from. As far as I can tell, programmers came up with the term “interpreted language” to describe languages for which a virtual machine was created to run them. That is, the very first way to run a program written in that language was on a virtual machine. By that measure, most languages are interpreted languages. The most common languages that aren’t are C, C++, and Rust, and everyone expects you to compile them. Python, Ruby, and PHP are all interpreted because they were first implemented on virtual machines. Java is a bizarre case because it is both compiled (to byte code) and interpreted by its first virtual machine, but it would nevertheless fall under the category of “interpreted language” according to the definition I’ve just given.
All that said, I don’t believe the terms “interpreted language” and “compiled language” are foolish. I find it rather arrogant when people say those who use such terms don’t know what they’re talking about. Yes, it’s a controversial topic, but there are plenty of terms in the programming world that undergo debate and change. “Free software”, for instance, was lumped in with the term “freeware” for years, and it took the FSF some time to get the general populace and search engines like Google to recognize “free software” as meaning “restriction-free”, as opposed to “freeware”, which now only means “cost-free”. The term “malware” is also subjective. Calling something “malware” simply because it does something people don’t want would technically include everything from buggy software to “rm” (the Unix or GNU coreutils file-removal program). While “malware” is software designed with malicious intent, code itself is meaningless, and even code originally intended for malicious purposes can be used for good purposes (albeit perhaps taken in bits and pieces, depending on what it does). Does that mean we should stop using the term “malware”? I don’t think so. There’s a general understanding of it, and someone may come up with an effective definition for it at some point. (I suppose you could argue I just gave one, but I don’t think it’s a good one, no pun intended.)
With regard to “interpreted” vs “compiled”, the language I have designed is obviously going to be interpreted – according to the definition I’ve given – because I’m building a virtual machine. That doesn’t mean it can’t be compiled, but I don’t really see how this language would be suitable for compiling anyway, whether merely reducing it to byte code (which wouldn’t save much space when the virtual machine is already in use) or trying to make it run as native code. I’m inclined to think there are some paradigms that work best when the language is run in a virtual machine.
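To illustrate the byte-code point: even when a program is “compiled” ahead of time, the result can still be instructions for a virtual machine rather than native code. A minimal sketch using Python’s built-in `compile()` and `dis` (Python stands in here for any VM-based language – my own language’s byte code would of course look different):

```python
import dis

source = "print(2 + 3)"

# "Compile" the whole source ahead of time...
code = compile(source, "<example>", "exec")

# ...but the result is byte code for CPython's virtual machine,
# not native code. dis shows the VM instructions, and exec()
# hands the code object to the VM to run.
dis.dis(code)   # shows instructions like LOAD_NAME / CALL
exec(code)      # prints 5
```

So reducing a program to byte code saves a translation step, but the virtual machine still has to be present at runtime – which is the sense in which I’d call the language interpreted.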


6 thoughts on “Interpreted vs Compiled”

  1. my understanding of interpreted vs. compiled is simply based on how much of the program is translated before it is executed: if it is the whole program, then it is compiled. if it is a line of the program, it is interpreted. this is an old-fashioned distinction dating back to the 1980s: https://ia601609.us.archive.org/34/items/bits_and_bytes_6-v2/06_-_Computer_Languages.ogv

    what makes it more confusing in my opinion are technologies like “just-in-time” compiling, which make traditionally interpreted languages perform more like compiled ones. are they interpreted, or compiled, or both?
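The old whole-program-vs-line-at-a-time distinction the commenter describes can be sketched in a few lines. This toy uses Python’s `compile()`/`exec()` as the stand-in translator (the function names are made up for illustration):

```python
program = [
    "x = 2",
    "y = x * 21",
    "print(y)",
]

# Interpreter-style: translate and execute one line at a time.
# An error on a later line wouldn't be noticed until the
# earlier lines have already run.
def interpret(lines):
    env = {}
    for line in lines:
        exec(compile(line, "<line>", "exec"), env)

# Compiler-style: translate the WHOLE program first, then run it.
# A syntax error anywhere is caught before anything executes.
def compile_then_run(lines):
    code = compile("\n".join(lines), "<program>", "exec")
    exec(code, {})

interpret(program)         # prints 42
compile_then_run(program)  # prints 42
```

Both produce the same output; the difference is purely in when translation happens – which is exactly why JIT, doing translation during the run, blurs the categories.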


    1. Thanks for the link! JIT does help blur the lines, which is why I resorted to saying that what mattered was using a VM first. Technically speaking, a language is neither an “interpreted language” nor a “compiled language”. That would be like arguing English spoken by humans is a “human language” as opposed to beep-boop spoken by AI as “robot language”, even if humans could speak beep-boop and robots could speak English. However, saying “human language” (at least from my expectations given past experience with similar subjective topics) would probably make most people think of languages that originated with humans for human-human communication. That said, it wouldn’t be “wrong” to use the term “human language” vs “robot language”.


      1. well take fig for example – fig is compiled entirely from fig to python. the result is a python program that runs entirely without the fig translator (compiler).

        then the python program absolutely requires the interpreter. even if you glue the bytecode to it, it still requires the interpreter. is this a meaningful way to distinguish and classify things? in most cases, it probably isn’t an issue.

        if it were up to me to develop the terms today, it would be like this: if it requires the “compiler”/vm/interpreter at runtime, then it’s interpreted; if it doesn’t, it’s compiled.

        but even in the 1980s, a rom chip would make that definition sometimes iffy.
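The fig situation can be sketched with a toy source-to-source translator. The mini language below is made up for illustration (it is not fig’s real syntax); the point is that the “compiled” output is ordinary Python source, which still needs the Python interpreter at runtime:

```python
# Toy source-to-source "compiler": a made-up two-command
# mini language translated into Python source text.
def translate(src):
    out = []
    for line in src.splitlines():
        cmd, _, arg = line.partition(" ")
        if cmd == "say":
            # say <word>  ->  print the word as a string
            out.append(f"print({arg!r})")
        elif cmd == "add":
            # add <a> <b>  ->  print the sum
            a, b = arg.split()
            out.append(f"print({a} + {b})")
    return "\n".join(out)

mini_program = "say hello\nadd 2 3"
python_source = translate(mini_program)

# The output is plain Python source text...
print(python_source)
# ...which still requires the Python interpreter to run:
exec(python_source)   # prints hello, then 5
```

By the commenter’s runtime test, such a language would count as interpreted even though a “compiler” exists for it.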


      2. On that note, a similar thing happens with CoffeeScript, which “compiles” straight to JavaScript. However, I wouldn’t really call this “compiling”. Yes, it’s “compiling” in the technical sense of the word (that is, assuming you’re compiling a bunch of files), but at least in the meaning used in this article – and what I feel is the prevalent view – “compiling” usually means going to byte code, though admittedly the creators of CoffeeScript probably say “compile to JavaScript” (I forget their exact words at the moment). In the case of CoffeeScript and fig, I’d probably use the term “converted language”, since fig is more or less a temporal language, sort of like a person’s shorthand writing that must be mentally translated into English words before being understood. The shorthand isn’t really a “human language” in the sense people would understand it, even though for technical reasons it has many of the makings of one – it has characters (even if matched by squiggles), a way it must be read, and a translation of meaning (to some extent) into another language. But yes, I do agree: requiring the “compiler”/vm/interpreter at runtime would make it an interpreted language. But then again, every language must be interpreted, which makes “interpreted language” sound like a redundancy. Notably, terminology gets thrown around a lot in the programming world, as I’m sure you’ve noticed. I’d like to see terms like “the cloud” retired, given how pointless a term it is. Fun talking with you, btw.

