In the comp.arch newsgroup, we've been following a heated discussion about parallelism. It focuses on the question of designing software to run on multiple cores, with either shared memory or message passing.
We're of the opinion that the compiler can assist the developer in this task. After all, the compiler knows what is (or could be) in memory at any one moment.
In current and future Code Development Systems we are implementing the ability to export an inter-processor interface from the build on one processor into the application build on a second processor. This functionality does three things:
- It almost eliminates inter-processor errors.
- The compiler can enforce inter-processor protocols.
- Code for each processor can be developed by separate teams working on inevitably different schedules. Exported function interfaces let different code releases for each processor be integrated into application builds.
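To make the idea concrete, here is a minimal sketch of what an exported interface might look like on the consuming side. All names, addresses, and the mailbox protocol are hypothetical; in practice the header would be generated by the exporting build rather than written by hand.

```c
#include <stdint.h>

/* Hypothetical generated header contents.  A real exported interface
 * would pin the mailbox to the address chosen by the CPU A build, e.g.
 *   #define CPU_A_CMD_MAILBOX ((volatile uint8_t *)0x2000)
 * For this host-buildable sketch we substitute a plain variable. */
volatile uint8_t cpu_a_cmd_mailbox;
#define CPU_A_CMD_MAILBOX (&cpu_a_cmd_mailbox)

/* Command codes shared by both builds.  A mismatch here is exactly the
 * kind of inter-processor error a single exported interface prevents. */
enum { CMD_IDLE = 0x00, CMD_START = 0x01, CMD_STOP = 0x02 };

/* Protocol rule, enforced at every call site: a new command may only
 * be written once CPU A has cleared the mailbox back to CMD_IDLE. */
static inline int cpu_a_send_cmd(uint8_t cmd)
{
    if (*CPU_A_CMD_MAILBOX != CMD_IDLE)
        return -1;   /* CPU A still busy: caller must retry later */
    *CPU_A_CMD_MAILBOX = cmd;
    return 0;
}
```

Because both builds compile against the same generated header, disagreements about the mailbox address or the command encoding disappear by construction instead of surfacing at integration time.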
We offer a #pragma write directive that emits arbitrary strings to an external file during compilation. This allows free-form code generation, such as creating target interface libraries for host firmware.
When these strings are emitted, the compiler expands a variety of ::CPU macros. These macros expose information that the compiler knows about the program it is compiling, information the host firmware needs: the locations of variables in shared memory, for instance.
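As an illustration only (the directive spelling and the ::CPU macro names below are a sketch, not verbatim Code Development Systems syntax), a target build might emit an interface header for the host firmware along these lines:

```c
/* Sketch only: file name, directive arguments, and ::CPU macro names
 * are illustrative, not actual Code Development Systems syntax. */

volatile unsigned char status;   /* lives in shared memory */

/* Emit a C header for the host firmware build.  The ::CPU macros are
 * expanded as each string is written out, so the generated header
 * always reflects this build's actual memory layout. */
#pragma write intf.h "/* Generated interface -- do not edit */"
#pragma write intf.h "#define STATUS_ADDR ::CPU(address_of, status)"
#pragma write intf.h "#define STATUS_SIZE ::CPU(size_of, status)"
```

The host firmware then includes the generated header, so its view of shared memory is always derived from the target build rather than maintained by hand.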
Most process control applications have a lot of inherent parallelism. It is worth looking at the IEC 61131 and IEC 61499 families of languages. Their primary goal is to describe the problem at a functional level, as interconnected control and data structures, rather than as an implementation in a language like C.