There’s a bit of disagreement over what counts as an “interpreted language” and what counts as a “compiled language”. Technically, you would think that any language that can be interpreted by a virtual machine can also be compiled, even if only with great difficulty. I can’t speak for every language out there, and I haven’t weighed all the considerations needed to reach a definitive conclusion, but I am inclined to think it is possible to invent a programming language that cannot be run as native code.
That’s not to say it couldn’t be “compiled”, but when most programmers think of compiled languages, don’t they think of a program that runs without a virtual machine? If you must build a virtual machine – even one written in assembly – then the language is, technically speaking, “interpreted”. All computer activity is interpreted by humans, of course, but let’s back up a minute and remember where these terms (“interpreted language” and “compiled language”) came from. As far as I can tell, programmers came up with the term “interpreted language” to describe languages for which a virtual machine was created to run them. That is, the very first way to run a program written in that language was on a virtual machine. By that measure, most languages are interpreted languages. The most common languages that aren’t are C, C++, and Rust, and everyone expects you to compile them. Python, Ruby, and PHP are all interpreted because they were first implemented on virtual machines. Java is a bizarre case because it is first compiled – to bytecode – and then interpreted by its virtual machine, but it would nevertheless fall under the category of “interpreted language” according to the definition I’ve just given.
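To make the Java case concrete, here is that whole two-stage pipeline using nothing but the standard tools. The file name and message are placeholders of my own choosing, not anything special:

    // Hello.java – javac compiles this source to bytecode (Hello.class),
    // and the JVM then interprets (and JIT-compiles) that bytecode.
    public class Hello {
        public static void main(String[] args) {
            System.out.println("Hello from the JVM");
        }
    }

Running “javac Hello.java” produces Hello.class, and “java Hello” hands that bytecode to the virtual machine – compilation and interpretation living in the same toolchain.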
All that said, I don’t believe the terms “interpreted language” and “compiled language” are foolish. I find it rather arrogant when people say those who use such terms don’t know what they’re talking about. Yes, it’s a controversial topic, but there are plenty of terms in the programming world that undergo debate and change. “Free software”, for instance, was lumped in with the term “freeware” for years, and it took the FSF some time to get the general populace and search engines like Google to recognize “free software” as meaning “restriction-free” as opposed to “freeware”, which now only means “cost-free”. The term “malware” is also subjective. Calling something “malware” simply because it does something people don’t want technically includes everything from buggy software to “rm” (the Unix or GNU coreutils file-removal program). The usual definition is that “malware” is software designed with malicious intent, but code itself is meaningless, and even code originally written for malicious purposes can be put to good use (albeit perhaps only in bits and pieces, depending on what it does). Does that mean we should stop using the term “malware”? I don’t think so. There’s a general understanding of it, and someone may come up with an effective definition at some point. (I suppose you could argue I just gave one, but I don’t think it’s a good one, no pun intended.)
With regard to “interpreted” vs. “compiled”, the language I have designed is obviously going to be interpreted – according to the definition I’ve given – because I’m building a virtual machine. That doesn’t mean it can’t be compiled, but I don’t really see how this language would be suitable for compiling anyway, whether merely reducing it to bytecode (which wouldn’t save much space when using the virtual machine) or trying to make it run as native code. I’m inclined to think there are some paradigms that work best when the language runs in a virtual machine.
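For readers who haven’t written one, here is a minimal sketch of the dispatch loop at the heart of a bytecode virtual machine. The four opcodes (PUSH, ADD, PRINT, HALT) are hypothetical and chosen purely for illustration; this is the general shape of the idea, not my language’s actual instruction set:

    // A minimal, hypothetical bytecode interpreter loop – just an
    // illustration of the dispatch-loop idea, not a real design.
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class TinyVM {
        // Hypothetical opcodes, for illustration only.
        static final int PUSH = 0, ADD = 1, PRINT = 2, HALT = 3;

        public static void run(int[] code) {
            Deque<Integer> stack = new ArrayDeque<>();
            int pc = 0; // program counter into the bytecode array
            while (true) {
                switch (code[pc++]) {
                    case PUSH:  stack.push(code[pc++]); break;            // operand follows the opcode
                    case ADD:   stack.push(stack.pop() + stack.pop()); break;
                    case PRINT: System.out.println(stack.peek()); break;
                    case HALT:  return;
                }
            }
        }

        public static void main(String[] args) {
            // Computes and prints 2 + 3.
            run(new int[] { PUSH, 2, PUSH, 3, ADD, PRINT, HALT });
        }
    }

The appeal of this shape is that the loop itself is the machine: features like runtime introspection or hot code swapping fall out naturally, because every instruction passes through one place you control, which is exactly what gets lost when you lower a program to native code.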