How to avoid interference in memory accesses on multi-core systems
Commercial off-the-shelf multi-core processors are inescapable nowadays: they are considered cheap and powerful. It is common to see processors with four or more cores and several levels of cache in embedded systems (see the figure).
Powerful? It depends. Such a platform certainly offers many cores to perform computations simultaneously. But a major issue in critical systems is their lack of predictability, caused by a number of bottlenecks: the resources shared between the cores. A typical example is memory accesses. All the cores access the main memory through a shared bus, and concurrency between cores is managed by a hardware bus arbiter that has no awareness of application-level priorities. This causes several types of interference at run-time.
We have to distinguish between two situations:
1. Either the data and the code are present in the cache(s): the instruction can be executed independently of the other cores without interference;
2. Otherwise the core needs to access the main memory before execution, and may have to wait, for an unpredictable amount of time, until the other cores complete their memory accesses.
Let’s illustrate this frequent worst-case situation on a 3-core processor:
At the beginning, three programs run in parallel (execution, E on the picture). Imagine that these programs want to access the main memory via the bus at the same time: from that instant, three cores need the main memory simultaneously! The hardware arbiter processes these accesses sequentially (M on the picture), and two cores sit idle for a cumulative time equivalent to three memory accesses.
A technique exists to avoid this problem, i.e., to prevent simultaneous memory accesses: memory-centric scheduling.
Memory-Centric Scheduling
Memory-centric schedulers are high-level schedulers that aim to limit or avoid concurrent memory phases. This way, contention in the shared memory subsystem is resolved in software, instead of being left to unpredictable low-level hardware arbiters.
With the memory-centric scheme, applications alternate between two coarse-grained states: memory phases (M-Phases) and execution phases (E-Phases). During an M-Phase, the application accesses the shared main memory; during the following E-Phase, it works only on cached data and instructions, with no access to the shared memory.
The memory phases consist of prefetch or load instructions, which bring the data and instructions needed by a subsequent execution phase into the cache, and of write-back instructions, which copy updated data from the cache back to the main memory.
In this example the application prefetches all its data during the M-Phases and executes without memory contention during the E-Phases!
One challenge is to design the memory-centric scheduler so that at most one M-Phase is active at a time. Another is how to decompose programs into sequences of M- and E-Phases.
We will focus on these challenges in a forthcoming article. In particular, we will consider real-time systems where the applications are recurrent (periodic), and we will present a technique to schedule the M- and E-Phases, based on a periodic application model, by managing their priorities.
I would also like to hear your opinion: do you know of any other methods to avoid situations where core capacity is lost?
References
Rodolfo Pellizzoni, Emiliano Betti, Stanley Bak, Gang Yao, John Criswell, Marco Caccamo, and Russell Kegley. “A Predictable Execution Model for COTS-Based Embedded Systems”. In 17th IEEE Real-Time and Embedded Technology and Applications Symposium, pages 269–279. IEEE, April 2011.
Gang Yao, Rodolfo Pellizzoni, Stanley Bak, Emiliano Betti, and Marco Caccamo. “Memory-centric scheduling for multicore hard real-time systems”. Real-Time Systems, 48(6):681–715, November 2012.
Claire Maiza, Hamza Rihani, Juan M. Rivas, Joël Goossens, Sebastian Altmeyer, and Robert I. Davis. “A Survey of Timing Verification Techniques for Multi-Core Real-Time Systems”. Verimag Research Report TR-2018-9, 2018. Submitted for publication.