Compiler modified to boost parallel processing efficiency

According to researchers from MIT, code written explicitly to take advantage of parallel computing usually loses the benefit of a compiler’s optimisation strategies. The reason, they say, is that managing parallel execution requires a lot of extra code, and existing compilers add that code before optimisation takes place.
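
To see why early-inserted runtime code gets in the way, consider the minimal C sketch below. The stub stands in for a parallel-runtime invocation; its name is hypothetical, and in a real build the call would land in a separate runtime library, so the optimiser genuinely could not see through it (here a noinline attribute approximates that).

```c
#include <stdio.h>

/* Stand-in for a parallel-runtime invocation; the name is hypothetical.
   Marked noinline (GCC/Clang) so the optimiser treats the call as opaque,
   much as it must treat a call into a separate runtime library. */
__attribute__((noinline)) static void runtime_stub(void) {}

static long twice_no_call(long *p) {
    *p = 21;
    return *p + *p;      /* the optimiser can fold this to 42 */
}

static long twice_with_call(long *p) {
    *p = 21;
    runtime_stub();      /* opaque call: *p may have changed, so reload it */
    return *p + *p;
}

int main(void) {
    long a = 0, b = 0;
    printf("%ld %ld\n", twice_no_call(&a), twice_with_call(&b));
    return 0;
}
```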

However, the team from MIT’s Computer Science and Artificial Intelligence Laboratory will describe, at a forthcoming ACM symposium, a modification to an open-source compiler that optimises before adding the code necessary for parallel execution.

As a consequence, said Professor Charles Leiserson, the compiler will optimise parallel code ‘better than any commercial or open-source compiler and it compiles where some of these other compilers don’t’.

A typical compiler has three components: the front end, tailored to a specific programming language; the middle end, which works on an ‘intermediate representation’ compatible with different front and back ends; and the back end, tailored to a specific chip design. In a standard serial compiler, says the team, optimisation happens in the middle end.
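
As a rough illustration, the three stages compose like the hypothetical stubs below; the names and printouts are stand-ins, not any real compiler’s API.

```c
#include <stdio.h>

/* Hypothetical stand-ins for the three compiler stages. */
static const char *front_end(const char *source) {
    printf("front end:  %s -> intermediate representation\n", source);
    return "IR";
}

static const char *middle_end(const char *ir) {
    printf("middle end: optimise the %s\n", ir);  /* where optimisation lives */
    return "optimised IR";
}

static void back_end(const char *ir) {
    printf("back end:   %s -> machine code for one chip\n", ir);
}

int main(void) {
    back_end(middle_end(front_end("program.c")));
    return 0;
}
```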

To test their system, the researchers built two versions of the LLVM open-source compiler. In one, they left the middle end alone, but modified the front end to add Cilk runtime invocations; in the other, they left the front end alone, but implemented a fork-join intermediate representation in the middle end, adding the runtime invocations only after optimisation.
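
Cilk extends C with fork-join keywords, as in the minimal sketch below (it assumes a Cilk-capable compiler, e.g. OpenCilk’s clang with -fopencilk). In the first build, the front end rewrites cilk_spawn and cilk_sync into opaque runtime invocations before any optimisation runs; in the second, the fork-join structure survives in the intermediate representation until optimisation is done.

```c
#include <stdio.h>
#include <cilk/cilk.h>

/* Classic fork-join Cilk: fib(n - 1) may run in parallel with fib(n - 2). */
long fib(long n) {
    if (n < 2)
        return n;
    long x = cilk_spawn fib(n - 1);  /* fork: the call may run on another core */
    long y = fib(n - 2);             /* the continuation runs here             */
    cilk_sync;                       /* join: wait for the spawned call        */
    return x + y;
}

int main(void) {
    printf("fib(30) = %ld\n", fib(30));
    return 0;
}
```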

Then they compiled 20 Cilk programs with both compilers. For 17 of the 20 programs, the compiler using the new intermediate representation yielded more efficient software, with gains of 10 to 25% for a third of them. For the remaining three programs, the falloff was less than 2%.

The researchers believe their approach should make it more straightforward to add optimisations tailored specifically to parallel programs, a development they say will be ‘crucial’ as more cores are added to processors.

Prof Leiserson said the idea of optimising before adding the extra code required by parallel processing had been around for decades. ‘But compiler developers were sceptical that this could be done. Everybody said it was going to be too hard, that you’d have to change the whole compiler. [Our team] basically showed conventional wisdom to be “flat out” wrong.’