For a computer scientist, there is little more satisfying than seeing things work exactly the way you want them to, especially when you have spent quite some quality time getting there.
Today I've given the parallel literal propagation module a shot and benchmarked it with some of last year's benchmark files. What I see is that my parallel algorithm scales very well with an increasing number of cores or threads. On a fixed number of cores, it even continues to scale when more threads are started than there are cores available. This suggests good cache usage, but so far I haven't investigated it. All in all, the speedup is about 5.3 compared to the single-threaded version. I distinguish two different notions of the term 'efficiency', which relates to the fact that I can schedule multiple threads on a single processor core. In the traditional sense, the efficiency for the speedup of 5 works out to 69%, which is an excellent score.
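To make the two readings of 'efficiency' concrete, here is a minimal C++ sketch of the arithmetic: speedup as the ratio of single-threaded to parallel wall-clock time, then efficiency once relative to the number of cores (the traditional sense) and once relative to the number of threads. The timings, core count, and thread count below are made-up placeholders, not the benchmark figures above.

#include <iostream>

int main() {
    // Placeholder values for illustration only; not the benchmark figures from the post.
    const double t_single   = 100.0; // wall-clock time of the single-threaded run (seconds)
    const double t_parallel =  20.0; // wall-clock time of the parallel run (seconds)
    const int    cores      = 8;     // physical cores in the machine
    const int    threads    = 16;    // threads scheduled (more threads than cores)

    const double speedup = t_single / t_parallel;

    // Traditional efficiency: speedup relative to the number of cores.
    const double eff_cores   = speedup / cores;
    // Second reading: speedup relative to the number of threads,
    // relevant when several threads share one core.
    const double eff_threads = speedup / threads;

    std::cout << "speedup: "              << speedup     << '\n'
              << "efficiency (cores): "   << eff_cores   << '\n'
              << "efficiency (threads): " << eff_threads << '\n';
}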
Hopefully, I'll soon be back with further results that prove me right: I'm now moving towards parallel unit propagation and conflict detection.