We are discussing a proposal to split a big C++ program into multiple separate executables that would communicate using shared memory. The shared data structures are large, so we do not want to use the loopback network or any other approach that would simply copy them.
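To make the idea concrete, here is a minimal sketch of the kind of sharing we have in mind, assuming POSIX shared memory (shm_open/mmap). The segment name "/big_data" and the BigData layout are placeholders for illustration, not part of the real program:

```cpp
// Writer side of a hypothetical shared-memory setup (POSIX shm_open/mmap).
#include <fcntl.h>     // O_CREAT, O_RDWR
#include <sys/mman.h>  // shm_open, mmap, munmap
#include <unistd.h>    // ftruncate, close
#include <cstdio>      // perror

struct BigData {
    double samples[1 << 20];  // large, trivially copyable layout, no pointers
};

int main() {
    // Create (or open) a named shared-memory object visible to other processes.
    int fd = shm_open("/big_data", O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }

    // Size the segment to hold the shared structure.
    if (ftruncate(fd, sizeof(BigData)) == -1) { perror("ftruncate"); return 1; }

    // Map it; another executable mapping "/big_data" sees the same pages.
    void* addr = mmap(nullptr, sizeof(BigData),
                      PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (addr == MAP_FAILED) { perror("mmap"); return 1; }

    auto* data = static_cast<BigData*>(addr);
    data->samples[0] = 42.0;  // written in place, no copy between processes

    munmap(addr, sizeof(BigData));
    close(fd);
    return 0;
}
```

The other executable would shm_open the same name and mmap the same region, so both processes work on the same pages and nothing is copied.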
The arguments for splitting are that every part could be developed separately and potentially replaced with an alternative implementation, even one in another language. The split would naturally prevent access to private data and code, and the parts would run in parallel, each in its own process.
The arguments against are that C++ has built-in means to structure even a large and complex project, hiding data and functions as designed. It is possible to use C++ multithreading to employ all cores of the CPU, and in that case the data can be passed by reference from module to module without any tricks.
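For comparison, this is roughly what the single-process alternative looks like: the large structure is handed between "modules" (here ordinary threads) by reference, so again nothing is copied. The names BigData, produce and consume are illustrative only:

```cpp
// Single-process sketch: modules as threads, large data passed by reference.
#include <functional>  // std::ref, std::cref
#include <iostream>
#include <numeric>     // std::accumulate
#include <thread>
#include <vector>

struct BigData {
    std::vector<double> samples;
};

void produce(BigData& data) {                // fills the structure in place
    data.samples.assign(1 << 20, 1.0);
}

void consume(const BigData& data, double& result) {  // reads it, no copy
    result = std::accumulate(data.samples.begin(), data.samples.end(), 0.0);
}

int main() {
    BigData data;
    produce(data);                           // could itself run on a thread

    double result = 0.0;
    std::thread worker(consume, std::cref(data), std::ref(result));
    worker.join();

    std::cout << result << '\n';
    return 0;
}
```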
Is there a widely accepted view on dividing a C++ program into multiple binaries running in parallel on the same host? Do any widely known programs work this way?
Suggestions to implement in another language are outside the scope of this question.