Transition complete. There are now three branches: fn, Bobcat, and master (also known as “Cheetah”). I’m hoping the new code-base in Cheetah will prove much faster than the original design. All things considered, I knew I had designed a slow system. It was effective and relatively easy to maintain, whereas the new system is likely to be somewhat more challenging. However, I had maxed out what I was going to get out of the old design: the processing code was very bulky, with tons of methods dedicated to it. I spoke about its task system in a previous post (which should have been published long ago), and now is as good a time as any to compare it with the new system. The new engine separates parsing from execution, reducing the primary execution sector (the methods doing the repetitive operations) to only a handful of methods.
The bulk of the parsing code (with the exception of the lexing) has been entirely replaced by a new system that uses tasks much as the old system used tasks for execution. By handling these tasks in the parsing phase, I am able to generate “opcodes” – instructions for execution – that are much faster than the inline processing I was doing before. If I wanted, I could even allow code execution to skip functions whose bodies were corrupt rather than resulting in an error that crashes the program – not that you would ever want to do that (nor did I bother implementing such an idea).
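To illustrate the general idea (not the actual engine code – every name below is hypothetical), here is a minimal sketch of the parse-once-into-opcodes approach: a parse step that turns tokens into flat instruction tuples, and an execution loop that is reduced to a handful of handlers.

```python
# Hypothetical sketch: parse once into flat "opcodes", then run them
# with a small dispatch loop instead of re-interpreting text inline.

def parse(tokens):
    """Turn a postfix token list like ["2", "3", "+"] into (op, arg) opcodes."""
    ops = []
    for tok in tokens:
        if tok.isdigit():
            ops.append(("PUSH", int(tok)))
        elif tok == "+":
            ops.append(("ADD", None))
        elif tok == "*":
            ops.append(("MUL", None))
        else:
            raise SyntaxError(f"unknown token: {tok!r}")
    return ops

def execute(ops):
    """The whole 'execution sector' shrinks to a few opcode handlers."""
    stack = []
    for op, arg in ops:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1]

# "2 3 + 4 *"  ->  (2 + 3) * 4 = 20
print(execute(parse(["2", "3", "+", "4", "*"])))
```

The win is that all of the per-token decision-making happens once, at parse time; running the same code again only replays the opcode list.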
All of the processing code of the old system has been removed and replaced by the new execution system. Ironically, since many of the processing duties have been handed to the parsing section, the total number of methods may actually have grown ever so slightly larger than before, or so it appears. At the same time, I’ve also changed my ISO for writing out functions, so components of function headers are separated onto multiple lines for better readability.
One small benefit of the new parsing system is that I should be able to return to the optional “fn” at the beginning of functions. That isn’t a priority, but implementing it should not be a major headache.
Finally, because the parsing system and execution system have been separated, it’s possible to write a better parsing system in the future if I don’t think the one I have is fast enough or safe enough. I could also adjust the system to output a kind of special binary file and read it back into the engine for execution.
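As a sketch of that binary-output possibility (again, all names here are hypothetical, not the engine’s), opcodes could be packed into fixed-width records, written to disk, and loaded back for execution:

```python
# Hypothetical sketch: because parsing and execution are decoupled, the
# opcode list could be serialized to a small binary file and read back
# in for execution later, skipping the parse entirely.
import os
import struct
import tempfile

OPCODES = {"PUSH": 0, "ADD": 1, "MUL": 2}
NAMES = {v: k for k, v in OPCODES.items()}

def save(ops, path):
    with open(path, "wb") as f:
        for op, arg in ops:
            # one byte for the opcode, four bytes for its (possibly unused) argument
            f.write(struct.pack("<Bi", OPCODES[op], 0 if arg is None else arg))

def load(path):
    ops = []
    with open(path, "rb") as f:
        while chunk := f.read(5):
            code, arg = struct.unpack("<Bi", chunk)
            ops.append((NAMES[code], arg if NAMES[code] == "PUSH" else None))
    return ops

program = [("PUSH", 2), ("PUSH", 3), ("ADD", None)]
path = os.path.join(tempfile.mkdtemp(), "program.bin")
save(program, path)
print(load(path) == program)  # True: the round trip preserves the program
```

A real format would also want a header with a magic number and version so stale files can be rejected, but the round trip above is the core of the idea.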
One of the not-so-fun things about this new engine is that most of the error messages need to be rewritten. Many can be saved, but since the parsing system interprets the tokens differently – and does much of the work in advance – many of the errors are new. On top of that, I’m now reporting a number of errors the previous system never caught because of how it was implemented. The most common is the possibility that some tasks haven’t been completed before the stream ends, leaving things waiting in the task-creation queue. The previous engine would not catch this at all, yet it could still have profound effects. (Imagine, for instance, calling a complex function at the end of your file and forgetting to pair the parentheses to call it.)
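A toy version of that end-of-stream check (hypothetical names, with parentheses standing in for any construct that opens a task) shows why it is easy to report in a task-based parser: anything still sitting in the pending queue when the tokens run out is, by definition, unfinished.

```python
# Hypothetical sketch: each construct that opens pushes a "task";
# whatever is still pending when the stream ends gets reported as an
# error, e.g. a call whose closing parenthesis was never written.

def check_stream(tokens):
    pending = []  # task-creation queue: constructs awaiting completion
    for i, tok in enumerate(tokens):
        if tok == "(":
            pending.append(("call", i))
        elif tok == ")":
            if not pending:
                return ["unexpected ')' at token %d" % i]
            pending.pop()
    # An engine that only reacts to tokens it actually sees would
    # silently ignore these leftovers.
    return ["unfinished %s opened at token %d" % t for t in pending]

print(check_stream(["f", "(", "x", ")"]))  # [] – everything completed
print(check_stream(["f", "("]))            # one pending task reported
```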
Overall, I’m quite confident this newer system is superior in a number of ways and should prove to be the final form of this interpreter’s engine. Admittedly, I may be speaking too early. It’s possible the new system has its own unforeseen speed bottlenecks or – more likely – its own class of nightmare bugs. I’ve made sure to embed my debugging macros into the new methods so I can trace problems, but as this system was thrown together awfully quickly, there are a bunch of unfinished details that need to be ironed out.
As commentary: progress was very fast for the first two weeks and then slowed in the third, as I oddly felt that the integration was a “big task” that had to wait until everything was perfectly ready. I came up with a to-do list of things to finish before integration, and surprisingly, it wasn’t that much, so I finally put it together. It goes to show that sometimes your intuition needs to be balanced against the dry facts. Yes, it took a while to integrate, but it wasn’t as big a job as I expected. One reason is that I did my best to reuse the existing code-base. Some things had to change (for the better), but most of the interpreter was “proven tech” that didn’t need alteration to work with the new engine. I’d say a good 60% was salvaged (including engine parts), so this ought not be considered a “separate” project by any means.
Once debugging is complete and I’ve completed the new foreign function mechanisms, I’ll be re-adding external functionality that I had in the previous iteration of the engine. Then, I can run benchmarks and see just how fast (or slow) this new system really is.
If everything works out well, I will finally give myself permission to post this code online. Here’s hoping it all works.