One of the limitations in creating an interpreted language is your ability to build an interpreter that can actually parse your language. What you can implement determines what semantics you can have.
While it is technically possible to create a parser for just about anything, we prefer to limit ourselves to semantics we can process in a reasonably short amount of time. In my experience, that means the language specification develops alongside the interpreter implementation. There’s nothing inherently wrong with this approach, and it often forces you to make careful considerations that may, in fact, save you some keystrokes in the long run. Say you have a great idea, S, for solving problem P. S may work perfectly for solving P, but when you start implementing it in your interpreter, you realize it doesn’t really mesh well with anything else, and that there may be a more generic solution that solves more problems with fewer keystrokes, so long as you can live with some minor extra details (or extra coding) to solve P. This is exactly what happened to me when creating pointers. I had devised a theoretical system that would eliminate pointer errors, but it only worked in certain cases, required a complex control structure, and could be easily broken by realistic code. Further contemplation on the matter led me to a very simple solution, but one that might have taken longer to discover had I not also considered how I was going to implement my original idea.
Being open to change can hurt the language. You don’t start writing a Python interpreter and change it midway through into a new language just because something was more difficult to implement than you had originally planned. Python has a predefined set of rules for you to follow. Violating those rules is like teaching your child the parts of two foreign languages and telling him that he now knows a single complete language. However, if you create your own language, there may be a tendency to deviate from the difficult work in order to make implementation easier. It’s one thing to change the language if it is unreasonably difficult (if not impossible) to implement. It’s another thing to do it for convenience.
To ensure your language is implemented as it should be, lay down the key rules and stick with them. Don’t compromise. Certain rules should never change; these are the ones you will probably use the most. For example, if your function bodies are created by enclosing code in curly brackets (as is the case with many programming languages), then you shouldn’t change the rules to require merely a function header and an “end” keyword (or some such keyword). If that’s what you are shopping for in a language, I can almost guarantee you will find it.
Your programming mentality affects your implementation of the interpreter, and consequently, what sort of language you will make. When I once tried to implement a parenthesis-controlled language (like Lisp or Clojure) as a “small” side project, I employed the same method I would have used for a number of other languages with very different syntactic expectations. I’m inclined to think I could probably play with certain interpreters for different languages and mess around with the syntax while still making the interpreter believe all the code was acceptable. This depends on the implementation and strictness of the interpreter, of course, but even the most stubborn interpreters may allow some leeway here and there. I haven’t checked. (Interestingly enough, I read an article recently about how to make messy non-traditional switch statements in C. Thanks HN.)
In conclusion, the design of the interpreter (or compiler) and the design of the language influence each other. This carries over into subsequent implementations of the language, but since most programmers tend to go with the official release, you can be sure that a huge portion of failures will be your fault (even if not legally). What a nice thought. Still want to create a language? I do.