This year, the release of Intel's i5 and i7 and AMD's X6 series processors has shown that core counts keep climbing. Has technology stopped? No, and it never will. But recent research suggests that simply adding more cores is not the ultimate solution. So, are YOU ready for them?
A group of MIT researchers are presenting a paper today at the USENIX Symposium on Operating Systems Design and Implementation in Toronto, titled "An Analysis of Linux Scalability to Many Cores," that details research about how very large numbers of cores affect processing performance. The paper may deal with Linux specifically, but it's a helpful reminder to everyone about the challenges facing hardware and software design in today's computing world and how important it is that these problems get solved soon.
The problem, according to the researchers, starts appearing with systems bearing numbers of cores in the dozens. They built a system in which eight six-core chips effectively mimicked one 48-core chip, then observed what happened in a lengthy series of tests. The large number of cores may have made for a blazing-fast system, but it was still slower than it should have been. The reason? This story from MIT News explains:
In a multicore system, multiple cores often perform calculations that involve the same chunk of data. As long as the data is still required by some core, it shouldn't be deleted from memory. So when a core begins to work on the data, it ratchets up a counter stored at a central location, and when it finishes its task, it ratchets the counter down. The counter thus keeps a running tally of the total number of cores using the data. When the tally gets to zero, the operating system knows that it can erase the data, freeing up memory for other procedures.
As the number of cores increases, however, tasks that depend on the same data get split up into smaller and smaller chunks. The MIT researchers found that the separate cores were spending so much time ratcheting the counter up and down that they weren't getting nearly enough work done.
According to the paper, the problems were often due to issues with cache:
Many scaling problems manifest themselves as delays caused by cache misses when a core uses data that other cores have written. This is the usual symptom both for lock contention and for contention on lock-free mutable data. The details depend on the hardware cache coherence protocol, but the following is typical. Each core has a data cache for its own use. When a core writes data that other cores have cached, the cache coherence protocol forces the write to wait while the protocol finds the cached copies and invalidates them. When a core reads data that another core has just written, the cache coherence protocol doesn't return the data until it finds the cache that holds the modified data, annotates that cache to indicate there is a copy of the data, and fetches the data to the reading core. These operations take about the same time as loading data from off-chip RAM (hundreds of cycles), so sharing mutable data can have a disproportionate effect on performance.
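The coherence cost the paper describes can bite even when counters are logically independent: if two of them merely sit on the same cache line, every write by one core invalidates the other core's cached copy (false sharing). Here is a minimal Go sketch of the effect; the struct names and padding size are illustrative assumptions, not anything from the paper:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

const iters = 1_000_000

// packed: two independent counters adjacent in memory, which very
// likely land on the same cache line, so concurrent increments from
// different cores cause false sharing.
type packed struct{ a, b uint64 }

// padded: the same two counters separated by padding so each gets its
// own 64-byte cache line and the increments no longer interfere.
type padded struct {
	a uint64
	_ [56]byte // fill out the rest of a's cache line
	b uint64
}

// hammer increments the two counters from two goroutines in parallel
// and returns their combined total.
func hammer(a, b *uint64) uint64 {
	var wg sync.WaitGroup
	wg.Add(2)
	go func() {
		defer wg.Done()
		for i := 0; i < iters; i++ {
			atomic.AddUint64(a, 1)
		}
	}()
	go func() {
		defer wg.Done()
		for i := 0; i < iters; i++ {
			atomic.AddUint64(b, 1)
		}
	}()
	wg.Wait()
	return atomic.LoadUint64(a) + atomic.LoadUint64(b)
}

func main() {
	var p packed
	var q padded
	// Both layouts compute the same totals; the padded one just avoids
	// the coherence traffic. Wrap the calls in timers to see the gap.
	fmt.Println(hammer(&p.a, &p.b), hammer(&q.a, &q.b)) // 2000000 2000000
}
```

The totals are identical either way; only the amount of cross-core invalidation traffic differs, which is exactly the kind of hidden cost that shows up as "cache misses when a core uses data that other cores have written."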
The conclusion, according to the MIT News story, is that "slightly rewriting the Linux code so that each core kept a local count, which was only occasionally synchronized with those of the other cores, greatly improved the system's overall performance." But will what works for modern-day Linux, and other operating systems, also work when the number of cores goes up even more? Past 48, according to one of the researchers, "new architectures and operating systems may become necessary."
PAPER PRESENTATION BY MIT RESEARCHERS
Follow us on Facebook and Twitter @demonstech. Any problems with your PC or gadgets? Write to us here at DEMON'STECH;
it's our pleasure to reply.