A Cooperative Job System

This section assumes you are familiar with the difference between multithreading and multiprocessing.

Today’s computers have multiple processor cores, and the numbers keep growing: with multi-processor boards, a single machine can have as many as 64 cores at the time of this writing. It is clear that speed advancements in today’s software can only be reached by utilizing parallel processing. It is equally clear that powerful multithreading and multiprocessing elements must be part of any new language.

Today’s languages leave this mostly to external libraries; C++11, for example, delegates it to the standard library. The only language I know of which has multithreading as part of the language itself is Ada. Ada does it pretty well, but still on a low level: its primitives exist so that programmers can build their own multithreading universe on top of these elemental building blocks. They are better than what C++ offers, but still relatively primitive.

What I’m proposing instead is a high-level system based on our practical experience with a huge server application over the past years. This system has evolved over several stages until it reached today’s form, and along the way we had to learn the hard way what does and doesn’t work with multithreading. In its latest stage it still has room for improvement, but it is already so powerful and practical that it has allowed us to rewrite our server system in a “building block” fashion where utilizing any number of processor cores is really easy.

Such a system should be part of the language itself, not an external library so that the compiler has control over all aspects especially when it comes to pointers (a topic for later). Having this all under the direct control of the language would eliminate the need for some OO stretches we had to make in order to implement this system in C++.

First, what does NOT work:

A) Having a HUGE number of Operating System threads. Each thread uses up a lot of resources, mainly a pre-allocated stack of several MB (2 MB in the standard Windows setup), which already restricts the number of threads. You can create several hundred threads that way, but there are two problems. First, if you have a server that must handle thousands and thousands of user requests, like a game server, you still need more parallel entities than you can create even with the largest memory. Second, switching between these threads is very expensive: create 100 threads and watch the simple performance graph in the Windows Task Manager, and you will see the red line go up and up. That line represents the system time your CPU spends, in other words, the time needed to switch between threads. Not to mention that a 2 MB stack per thread that is hardly used wastes most of your memory for nothing in return. So this is not the answer.

B) Creating threads as needed and destroying them when the task is finished. This is even worse: you don’t know how many might be created at peak times, so you may run out of memory, and creating and destroying an OS thread is EXTREMELY expensive. Thus this does not work at all.

What is the solution then?

Create a number of worker threads in relation to the number of processor cores and keep them alive. Then assign code to be executed to these threads. When one section of code finishes, another one is assigned (with the respective data).

In our lingo, these code/data objects are called a “Job”. A Job is basically a class derived from a base class that provides the functionality needed to carry out operations in parallel.

Here is where it gets interesting: each Job’s code will eventually have to perform blocking operations, like sending a request to another thread or process and waiting for an answer. Doing this the naive way, by blocking the whole thread, would lead to the problems described under A and B above. So the solution is that the thread system (in the language) must “send data as a message”, and the code that called this “send” functionality must then return control to the threading system. It must then be prepared to be called again in another state with the answer to its request.

This is a “cooperative Job system” because it is the user code that relinquishes control when it realizes that it cannot proceed without receiving more data from other sources. It is then prepared to react to the answer and resume processing at another location in the code. There is no other way of doing this: any non-cooperative way would require the OS to switch the stack to that of another function that should resume, and with stacks being one cohesive area of data, this is basically not possible.

The simplest way to do this (and the way we do it) is by having an overridden “int Run( int State )” function. The code in the Run() function consists of one big switch() statement. Each case terminates with a “return X” line, where X is the state with which the Job wants its Run() function to be called when the answer arrives. The switch statement will then continue at “case X” when the answer arrives, and voila, we have a cooperative Job system.
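
A minimal sketch of such a Job in C++; the base class, the DONE constant and the way the answer is delivered back into the Job are assumptions for illustration, not the actual production interface:

```cpp
#include <string>

// Base class for all Jobs. Run() returns the state in which the Job wants
// to be resumed once the awaited answer arrives, or DONE when finished.
class Job {
public:
    virtual ~Job() = default;
    virtual int Run(int state) = 0;
    static const int DONE = -1;
};

class LookupJob : public Job {
public:
    std::string answer;   // filled in by the scheduler before the resume call
    std::string result;
    int Run(int state) override {
        switch (state) {
        case 0:
            // Here the Job would send a request to a Server Job (the send
            // API is omitted), then yield its thread and ask to be resumed
            // in state 1 when the answer arrives.
            return 1;
        case 1:
            // The answer has arrived; finish up.
            result = "processed: " + answer;
            return DONE;
        }
        return DONE;
    }
};
```

The scheduler would store the returned state, deliver the answer into the Job when it arrives, and call Run() again with that state.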

Of course there are many more intricate details which I won’t elaborate on here and now, but suffice it to say that this system works beautifully. When checking the processor load, we can see that all processor cores can be driven to full load with only very minimal calls into system code, since no context switches are necessary. All threads run constantly. A true time-sharing system.

This is only half the system though. Our Job system is divided into two (actually now three) basic types of Jobs. One is called a “Server Job” and the other a “Client Job”.

A Server Job is a Job that defines service requests and can be called from Client Jobs. The server then wakes up when a request arrives, carries it out and returns to sleep. Server Jobs don’t have states in the above sense, because that would be detrimental to their responsibility of servicing many Client Jobs with little wait time. Their task must be finished within one wake-up call, including sending an answer.

There are only a few Server Jobs, and they are all what’s called a singleton: they exist only once in the system. A good example is a database Job that provides access to database entries and performs transactions on them to alter entries.
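
As a hedged illustration, a singleton Server Job along these lines might look like the sketch below. The request format and member names are invented; a real one would be woken by the message system rather than called directly, and the whole request is serviced in a single call, with no resume states.

```cpp
#include <map>
#include <string>

// Illustrative singleton Server Job: one instance in the whole system,
// each request handled completely within one wake-up.
class DatabaseServerJob {
public:
    static DatabaseServerJob& Instance() {
        static DatabaseServerJob instance;
        return instance;
    }
    // The entire request is serviced here, including producing the answer;
    // no state is carried over between calls.
    std::string HandleRequest(const std::string& key) {
        auto it = entries.find(key);
        return it != entries.end() ? it->second : "<not found>";
    }
    void Store(const std::string& key, const std::string& value) {
        entries[key] = value;
    }
private:
    DatabaseServerJob() = default;
    std::map<std::string, std::string> entries;
};
```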

Client Jobs, on the other hand, are a dime a dozen. Whenever a user sends a request to our game server, for instance, we create a Client Job that handles this request. The Client Job may even call other Client Jobs to handle different aspects of the user request in parallel. These child Jobs can be fire-and-forget or synchronized with the calling Client Job, which then waits for the child Job to finish.

Finally, we have recently been adding a type of “Hybrid Job”, which is a Server Job that can run its own local Client Jobs, for cases where a Server Job needs to call other Jobs and wait for an answer. It can’t really do that in its main loop, so it uses its own local Client Jobs instead.

All in all, this system has been stress-tested with tens of thousands of Jobs in parallel. As long as one state of a Job does not occupy its thread for a lengthy period of time, but rather performs short bursts of computation and then waits for an answer or new data, the reaction time is spectacular. We are using this system in our new game server now with great success. Not only does it allow us to use multithreading efficiently, it also makes it EASY to use, since the programmer does not need to deal with low-level constructs like signals and mutexes anymore. By creating our own database service based on this system, we have also solved the problem of deadlocks when altering game objects.

Of course this system is currently Object Oriented and would have to be rewritten in order to work with a procedural programming paradigm, but that should be easily possible.

In my opinion, any new language that is designed to use multi tasking must incorporate a system like this.

No Global Variables

WHAT? Yes, that’s what I thought too when I came up with the idea. How am I supposed to implement a service, like a memory database that holds objects to be fetched by other program parts? At first glance it seems weird, but it can be done.

Think about it: everywhere we hear the mantra “don’t create functions with side effects!” And what bigger side effect is there than a function that modifies a global variable which in turn is used in a lot of other locations? Besides, if you want to use Unit Tests, how do you test such a scenario?

So I agree completely with the axiom that side effects are bad. And the availability of global variables invites the programmer to use them. “Oh, just this once, since it’s easier than redesigning this code.” Yes, and once more, and again and again, and before you know it, it’s common practice.

My solution to this problem is NO GLOBAL VARIABLES PERIOD!

By eliminating them, the language will enforce clean coding by leaving the designer no choice but to have clearly defined Module interfaces.

Variables can only be declared inside a module and are available only inside the module in which they were declared. They can then be passed as actual parameters in a module call and thus be made available to another module, but they don’t outlive the module itself; in other words, they exist only for as long as the module is running.

This makes Unit Testing a breeze. There are no global variables, there are no C++ objects with state variables; the Module is COMPLETELY defined by its actual parameters. NO SIDE EFFECTS AT ALL.

Imagine there is a huge hierarchy of modules which solve a complex problem. There will certainly be a need for something like “permanent variables”; for instance, a file could be permanently open as a log output. The file handle will then have to be created at the top level of the hierarchy and passed to the lower modules through the parameter interface.

Will this make the interface larger? No doubt. There are ways to keep interfaces neat and clean, for instance by creating a data type that combines this handle with other commonly needed data in a record so that the parameter count can be minimized, but these are stylistic questions. I have heard some people argue that a function should never have more than one or two parameters. Obviously that will not be the case here, especially not at higher levels. Lower-level Modules will tend to have fewer parameters than higher-level ones, but there is nothing wrong with that. It’s just the way things are. If data are needed, they have to be brought to the location where they are needed somehow: either through the parameter interface, through global variables, or through object state variables. And since we eliminate the latter two, there is only the Module interface.
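
To illustrate the record idea, here is a small C++ sketch (all names are hypothetical): a Context record created at the top of the hierarchy carries the would-be globals, and every module receives it explicitly through its parameter list.

```cpp
#include <sstream>

// Created once at the top of the module hierarchy and passed down explicitly,
// instead of living as global state.
struct Context {
    std::ostringstream log;   // stands in for a permanently open log file
    int verbosity = 1;
};

// A low-level module: everything it touches arrives via its parameters.
int AddTaxed(Context& ctx, int net, int taxPercent) {
    int gross = net + net * taxPercent / 100;
    if (ctx.verbosity > 0)
        ctx.log << "AddTaxed(" << net << ") = " << gross << "\n";
    return gross;
}

// A higher-level module simply forwards the same context record.
int InvoiceTotal(Context& ctx, int a, int b) {
    return AddTaxed(ctx, a, 19) + AddTaxed(ctx, b, 19);
}
```

Because the modules are completely defined by their parameters, a unit test only has to construct a Context and check the return values.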

I believe this will eliminate a lot of common problems and causes for hard to find bugs.

Forced Modularization

I assume you are familiar with the vague term “module” from Structured Design. This is not to be confused with the module from Modula-2. Our modules are basically small functions or procedures from other languages. Everything that would be “put in a box” in a Structure Chart would be a module.

There are generally two types of modules: Control Modules and Work Modules, both subsumed under the term “Module”.

A Control Module should only be able to call other modules and perform comparisons, for instance “if a=2 then call module X else call module Y”, but should not be able to perform actual data modifications or calculations itself.

A Work Module on the other hand has the complete feature set of the language available in order to perform all sorts of calculations or operations, but it is NOT allowed to call other Modules.

This may look strange at first glance. For instance, if a module processes a number of records from a data file, say to print out a list of records which are due for payment, shouldn’t it be able to open the data file, read one record after the other and close the file? Surely the OpenFile or ReadRecord functionality is pretty complex and needs to be encoded as a hierarchy of “modules” itself?

Well, that is true, but in this case perhaps the complete “module” that does the printing should itself be split up into a hierarchy of Control Modules and Work Modules in the first place, with only small units of functionality like “check if this record needs to be printed” or “calculate the number of days before the due date” and so on. When you believe that your Work Module should be able to call other modules in order to perform its duty, chances are you have not dissected the problem well enough and your “Module” is really a Frankenstein module that should be split up into a hierarchy of modules.
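
A toy version of this decomposition in C++, with the record layout and module names invented to mirror the payment example above:

```cpp
// Record layout invented for illustration.
struct Record {
    int daysUntilDue;
    bool paid;
};

// Work Module: pure computation, calls no other modules.
bool NeedsReminder(const Record& r) {
    return !r.paid && r.daysUntilDue <= 7;
}

// Work Module: pure computation, calls no other modules.
int ReminderLevel(const Record& r) {
    return r.daysUntilDue < 0 ? 2 : 1;   // overdue gets a stronger reminder
}

// Control Module: only comparisons and calls to other modules,
// no calculations of its own.
int ProcessRecord(const Record& r) {
    if (NeedsReminder(r))
        return ReminderLevel(r);
    return 0;   // nothing to do
}
```

Each Work Module is small enough to be tested in isolation, which is exactly the point of the split.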

Clearly, I advocate very small Work Modules, and the reason is that the smaller a module is, the easier it is to test.

What about functions, as in “a := root( 10, 2 )”? Functions should be just a specialized form of a Work Module which happens to return a value. Other than that, they should obey the same rules as Work Modules, with the exception that they can call other functions as well, since mathematical functions especially often need to use other mathematical functions. Furthermore, Work Modules can call functions, but the return value of a function cannot be discarded as in C, where a function can be called without using the return value. And yes, this softens the rule that Work Modules must not call other modules, but I believe this is necessary for mathematical and other expressions to be coded in a reasonable manner.

The syntax for modules should be created in such a way that these rules can be enforced easily by the compiler. Furthermore, it should be possible to place multiple modules in one source file in order to make writing the program easier. I would advise against this, though, and place each module into its own source file, but I realize that many programmers would take this as too great a restriction. However, should there ever be an IDE that supports this type of workflow, it would also be prudent to enforce that each module has its own source file.

It may not be necessary to split modules into two files, like a Definition and an Implementation file as in Modula-2 or .H and .CPP files as in C++, since the compiler can extract the calling convention from the module and store it in a binary file which is later imported by those modules that use the called module.

This also implies that binding of modules is done through a binary interface, not through an include system as in C++, where the definition part of a function is compiled whenever the .H file is included.

A hierarchy of Control and Work Modules with one Control Module at the top (a tree of Modules) can be combined into a form of package called a “Subprogram”. This Subprogram can then be compiled and linked into one binary, or rather two binary files (the OBJ file for the linker and one with the binary link information for the compiler), and then handed out to other programmers or put into a library of Subprograms.

Features 1

This section and the following ones will describe the main features very briefly. The descriptions won’t point out solutions as to how to implement the respective features, nor is it always clear whether the features can be achieved with reasonable effort; rather, this is a form of “wishlist” of all the things I deem necessary for a new gaming language.

Strict Division Between Code And Data

What? Heresy! Yes, exactly, that’s the first reaction. After all, ever since Bjarne Stroustrup brainwashed us all into believing that Object Orientation with C++ is the non plus ultra, it has been clear to anybody that code and data belong together.

However, I have several problems with that. One was already mentioned in the Intro section: if you want to keep code and associated data close together, oftentimes it is not clear where the code belongs. Take a simple example in C++ like this:

A function A should manipulate the classes X and Y in a way that makes the changes to X and Y pretty much equally important and equally work-intensive. Where does function A go? Will it be a member function of X and therefore access the data in Y through access functions in Y, or the other way around? This is a relatively common problem in my experience, and my decisions were always arbitrary, so that I could not really justify them even to myself.

Often it isn’t even clear that a code section should be part of any C++ class at all. Some code is more like tool code that shouldn’t be part of an interface, or shouldn’t be visible to the programmer in the IDE in close conjunction with the data structure at all. So it is better if the code doesn’t have to be artificially associated with a certain class, or, worse, be made a global function, thus breaking the OO style used for all the classes.

Here is another big, big problem that is specific to a certain type of application, mainly client/server programs, which is what I have spent the last 15 years dealing with:

Suppose you have a data structure X. This data is to be shared between the client and the server; on the client this data is only read. For instance, imagine the data that describe a character in an RPG, which would typically contain such things as hit points, mana, the name, speed, encumbrance, the complete inventory the character is carrying and so on.

Now, the client will read this data in order to render the respective information, but will not alter it. The server, on the other hand, will have to do quite extensive data manipulation, which can run into tens of thousands of lines of code. Needless to say, this code will not be “self-contained”: it will access lots and lots of code that is part of the server. Obviously this code will not compile for the client application.

So what’s the solution? The naive C solution is to use #ifdefs and definition constants like SERVER and CLIENT and then compile the code conditionally. I did that with my first client/server game, and it was a huge mess in the end. So for my second project I decided to split each data element in two. Say, for instance, you organize your character into smaller subobjects to keep things neat, such as a subobject “Fight_Data”. I split the data from the code and created a data class which depends on nothing that is not available on the client, with just some bare-bones essentials in it, for instance read and write functions so the data can be stored and loaded. The server then has a “Fight_Data_Work” class which contains the “Fight_Data” data class and has access to its fields as if they were part of the server class. That “Fight_Data_Work” class contains all the code that does the heavy lifting and ties the data into the server system.
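
A stripped-down sketch of that split in C++; the field names and member functions are invented for illustration, not the actual game code:

```cpp
#include <iostream>

// Shared between client and server: nothing but the data and bare-bones I/O,
// so it compiles anywhere.
struct Fight_Data {
    int hitPoints = 100;
    int mana = 50;
    void Write(std::ostream& out) const { out << hitPoints << ' ' << mana; }
    void Read(std::istream& in) { in >> hitPoints >> mana; }
};

// Server only: the heavy lifting wraps the data class and works on its
// fields directly.
class Fight_Data_Work {
public:
    Fight_Data data;
    void ApplyDamage(int amount) {
        data.hitPoints -= amount;
        if (data.hitPoints < 0) data.hitPoints = 0;
        // ...a real server would also notify other server-side systems here...
    }
};
```

The client links only against Fight_Data; the server wraps the same data in Fight_Data_Work and never needs an #ifdef.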

So the next logical step in my eyes would be to completely separate code and data, as it was done in Pascal or Modula for instance.

Now I hear the screams: “But what about data protection?” Yes, what about it? In most classical C++ cases, a class will have access functions (getters and setters) to give access to each class element. These are not protected; any rogue programmer can use them just like the raw data elements. So why write these access wrappers at all? They just cost additional time.

Besides, we’re not dealing with a fortress here that is to be fortified against the enemy. All programmers that have access to the structure work for the same company, or they have bought your library to accomplish something. So we can safely assume we’re all on the same side. And if your company has a numbnut programmer who constantly causes mayhem, maybe it’s easier to replace that guy instead of putting “protection” mechanisms into the code.

So I have no problem whatsoever with open structures. There is, however, a way to protect data in structures, one even more efficient than the getter and setter functions in C++. It is an option that may or may not become part of the language; I will address it later, after some other topics have been discussed.

Why Not Create A New Language For Games?

Why do we need a new programming language?

Programming languages are nothing but tools to solve certain problems. I was always suspicious of claims that a language is a “universal language” suited for all sorts of problems. C++ is perhaps the most notorious of those languages. I don’t want to repeat all the arguments against C++ that have already been printed many times elsewhere. I think languages should be narrower in their purpose, which should also make them better suited for that purpose. Better than a “universal” language in any case.

My background is games programming, and in this field the standard solution is, unfortunately, C++. There is virtually no alternative. Some game engines, most notably Unity, use C#, but that is even worse, since it focuses mostly on the Windows platform, although I believe there are free C# implementations on Linux. However, C# has other disadvantages, like the garbage collector. Garbage collection is the worst of all ideas, mainly because it can defer object destruction to a later point in time, so that the programmer has absolutely no control over when resources are freed.

Now, back to my original argument. I have been working on games for over 20 years now, I have used C and C++, and I can’t say that I had real fun moments with these tools. There are always problems for which these languages don’t provide out-of-the-box solutions, and then come the libraries. The multitude of libraries out there is staggering: many do the same things, often overlapping, some doing certain things better, others doing other things better, some buggy here, others buggy there. Some libraries require other libraries, perhaps in a version that doesn’t work on your specific configuration, or with a system that a customer might have, which is even worse.

Third-party libraries are a nightmare. Of course, one will eventually need some; for instance, a fast 3D system is necessary for games and nobody will seriously attempt to write their own. But having to use libraries for simple things like containers, strings, files or multiprocessing? Here again there are many to choose from, and even though the STL has greatly unified things, it is still a library, and it is hellishly complicated, mainly because it does things which were not considered in the original language definition, so the library has to contort the language in order to make those things happen. Anybody who has ever dealt with templates of templates in connection with inheritance knows how quickly the STL can reach a level of complication that makes programs difficult to write and even more difficult to read.

So one of the things a language should do is provide all the tools a programmer will need in his day-to-day life, so that there is no need for third-party libraries. This makes the language much larger, but the extra features are rather elementary, and the language does not need to provide highly sophisticated abstract concepts just so the same features can be programmed outside of it. They are intrinsic.

Another flaw of today’s languages, for those who have experienced old-time Turbo Pascal or maybe even home computer Basic versions, is that they are incomplete with regard to input and output. There are no features for text input and output, nor for GUI programming. Again, these things are handled by third-party libraries. What this means for a novice is that he will not only have to learn the language and its probably very complex abstract concepts, but also a third-party library and possibly the peculiarities of the underlying OS, merely to open a window and perform some input and output. In the old days, one read the Basic manual that came with the computer and was in principle able to write a small game right after finishing the book. That was my experience with several home computers over the years, and the same with Turbo Pascal. A language should contain IO features to handle text-related formatting in the console, at least to the level of the ncurses library, and also graphical capabilities. For this, I propose an integral implementation of the X Window System client library together with a window manager, so that it will be possible to create windows and perform simple text IO as well as drawing commands. Again, this should not go to a level where a 3D game can be programmed with intrinsic commands, but at least mid-level GUI applications should be possible with the system.

Today’s computers all have a number of processor cores. Multithreading and multiprocessing programming is a must, and server applications will even need to integrate multiple computers over a network. Therefore multitasking should be part of the language, and not only to the extent that a thread can be started and a signal or mutex set, but as a full-featured system of multiple agents that run in threads and in different processes, locally or remotely, and which can exchange messages and enable RPC, all within the language. Of course this requires that the language also provide a complete network interface.

Finally, another flaw, in my eyes, is today’s focus on Object Oriented Programming. Again, there have been many articles written about the problems of OOP, just search for “OOP sucks” on the web or take a look at this article that describes in great detail the organization of the underlying system in OOP which make it very slow: (http://harmful.cat-v.org/software/OO_programming/_pdf/Pitfalls_of_Object_Oriented_Programming_GCAP_09.pdf)

OOP often leads to a huge call stack which is difficult to debug, because it is not always clear why a message was sent from a given object. Also, in C++, programmers tend to squeeze everything into the class declaration, which results in a list of functions in the IDE that are mostly specialized helpers belonging to other functions, quasi a structural breakdown; yet the observer sees all of them in the IDE and is overloaded with interfaces that do not concern him.

Another problem with OOP is the “state” of the object. OOP functions often receive part of their operational data through the function interface and take the rest from the object itself, which is natural for this type of programming, but it definitely makes Unit Testing more difficult: since the data that define the operation of a function do not all come through the interface, the state variables of a C++ class often have to be simulated in order to test one of its functions. Not a good solution.

Plus, when defining a function that takes, say, two different classes as parameters, it is often not clear whether the function should be part of the first class or the second. One can often argue equally strongly for both solutions, and as such, the decision into which class the function goes is often arbitrary.

I tried some other languages recently, mostly Ada and Embarcadero Delphi, which is essentially Turbo Pascal. But Ada is vastly over-engineered and its syntax is even more difficult to get right (just check the Ada section of stackoverflow.com and the type of questions asked by people who get vexed by unexpected behavior of Ada features), and Delphi suffers from its own third-party hell, where such libraries are also necessary and not easy to install. Besides, Delphi only exists for Windows, which is a restriction that I don’t want to accept.

I’m sure there are many more reasons, but I hope the reader will understand why I think that no language on the market fits all my needs, so I really think it makes sense to think about a new language that is better suited for games and application development.

Now, the reader might think this is a fool’s errand, and that would have been true a few years ago. Today, there is LLVM, a compiler-building toolkit that takes care of all the heavy lifting. All that is left is to define the language and program the front end that translates the language elements into a precisely defined Intermediate Representation (IR). From that point on, LLVM takes over and handles the optimization and the code generation for different processors and operating systems. With this toolkit, it is possible even for people like myself, whose focus has not been compiler building, to write a decent compiler, as many projects out there show. Many average people have undertaken the task of creating “toy languages”, probably so called because they are afraid of being mocked when they say they are “writing their own language”. Perhaps they think LLVM’s promises are too good to be true, but I don’t think they are. I have already played around with LLVM to the point where I could parse complete expressions and print results to the screen via a small runtime library which I wrote in C.

I used Lex and Yacc to scan the source language, but boy, are these two tools screwed up. They work once you have them going, but if you have a mistake in your scripts, God help you. Writing a lexer and a parser for a specific language isn’t a big problem, though, so I think it’s better to write specific code for these parts directly in C. But perhaps I will find better alternatives out there and use them.

So LLVM is there, it works, it’s well maintained; even Apple uses it as the basis for their C++ compiler, so it must be good. I think it is possible to define and write a language that is better suited for my specific tasks than C++, and that’s what I want to do over the next years. I will post short articles here that describe my progress whenever appropriate. Once I come to the point where I actually develop code, I will provide all of it as Open Source on one of the hosting systems out there (https://en.wikipedia.org/wiki/Comparison_of_source_code_hosting_facilities), in the hope that some people might find my work useful and will perhaps check it out.

As to my timeline, I am fully aware that this is a lengthy task, especially for somebody working alone and doing it on the side after my main work, so it will probably take several years. Do I expect to finish this task successfully? Not really, but I want to try at least, and if everybody let go of every idea that sounds ludicrous, nobody would ever invent anything.

Coherence of Data and Code. Is it a mistake?

Just a brief reflection on something that has puzzled me for quite a while now.

I learned programming the old way: Basic, Assembler, Pascal, Modula-2, C and then C++; that’s my path. Except for C++ (with which I have been stuck for 20 years or so), the principle of keeping data and code together wasn’t really a topic. In fact, there was little that forced you to keep them together until C++ became popular.

Today, it’s a religious dogma. You must keep the data and the code together in a class. Don’t make your data elements public, or anybody can access them! Implement accessor functions instead. Nobody doubts this these days, but I have run into situations where this dogma literally holds me back. Let me explain.

Making client/server games the old-fashioned way (meaning with a dedicated client application, not a web browser) presents you with the following challenge: oftentimes you have to define data that is to be used on both the client and the server. But of course, the code won’t fit both: the code that accesses the data on the server often has references to other objects or code that only exists on the server and therefore cannot be compiled or linked with the client, and vice versa.

So in my previous game I used the #ifdef statement extensively to make those classes compile one way on the server and another way on the client. This is, of course, a terrible crutch.

In my current game I came up with the solution of a shared link library that holds the data. I then add classes on the client and the server which have this data structure as a private member and provide the separate access and computation routines for the client and the server respectively.

But this simply tells me that the dogmatic “unity of code and data” is not something natural. I am sure there are other examples where code and data should be separate.

Currently there is no good solution, but I want to submit as a suggestion that the religious view that code and data are inseparable may be false.

Another Example Of Bad Technology

Take a look at this article. It talks about the bugs that still plague today’s browser versions. And is it any wonder? These browsers not only have to incorporate code to make different versions of HTML render correctly, but also ActiveX components, Java, and a whole host of other crap I don’t even know about. Browsers are one of the biggest monstrosities in software. That’s why I use Firefox. Although it may not be much better in that regard, at least it is Open Source, so people can check (although few do, I imagine 😀) that there is no spyware in it.

My main point, however, is that today’s software development is so terribly complicated, due to the requirement of interconnecting different software, different device types, different problem domains and different networks, that it gets slower and slower and more and more vulnerable. Did you know that the DOW (or DOD as it is officially called these days) considers cyber attacks the number one threat to the safety of the USA? What does it say about the quality of today’s software when we have to be more afraid of a hacker than we were of the Soviet Union?

And if you don’t agree, check out this article here before you fly again.