dc.contributor.author: Huang, Andrew "bunnie" (en_US)
dc.date.accessioned: 2004-10-20T20:29:51Z
dc.date.available: 2004-10-20T20:29:51Z
dc.date.issued: 2002-06-01 (en_US)
dc.identifier.other: AITR-2002-006 (en_US)
dc.identifier.uri: http://hdl.handle.net/1721.1/7096
dc.description.abstract: The furious pace of Moore's Law is driving computer architecture into a realm where the speed of light is the dominant factor in system latencies. The number of clock cycles required to span a chip is increasing, while the number of bits that can be accessed within a clock cycle is decreasing. Hence, it is becoming more difficult to hide latency. One alternative solution is to reduce latency by migrating threads and data, but the overhead of existing implementations has so far made migration an unserviceable solution. I present an architecture, implementation, and mechanisms that reduce the overhead of migration to the point where migration is a viable supplement to other latency-hiding mechanisms, such as multithreading. The architecture is abstract, and presents programmers with a simple, uniform, fine-grained multithreaded parallel programming model with implicit memory management. In other words, the spatial nature and implementation details (such as the number of processors) of a parallel machine are entirely hidden from the programmer. Compiler writers are encouraged to devise programming languages for the machine that guide a programmer to express their ideas in terms of objects, since objects exhibit an inherent physical locality of data and code. The machine implementation can then leverage this locality to automatically distribute data and threads across the physical machine by using a set of high-performance migration mechanisms. An implementation of this architecture could migrate a null thread in 66 cycles, over a factor of 1000 improvement over previous work. Performance also scales well; the time required to move a typical thread is only 4 to 5 times that of a null thread. Data migration performance is similar, and scales linearly with data block size. Since the performance of the migration mechanism is on par with that of an L2 cache, the implementation simulated in my work has no data caches and relies instead on multithreading and the migration mechanism to hide and reduce access latencies. (en_US)
dc.format.extent: 299 p. (en_US)
dc.format.extent: 13404896 bytes
dc.format.extent: 2307234 bytes
dc.format.mimetype: application/postscript
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.relation.ispartofseries: AITR-2002-006 (en_US)
dc.subject: AI (en_US)
dc.subject: HPC parallel computer architecture queues fault tolerance programmability ADAM (en_US)
dc.title: ADAM: A Decentralized Parallel Computer Architecture Featuring Fast Thread and Data Migration and a Uniform Hardware Abstraction (en_US)
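
The abstract above quotes concrete migration costs: 66 cycles for a null thread, 4 to 5 times that for a typical thread, and data migration that scales linearly with block size. The minimal Python sketch below turns those figures into a back-of-the-envelope cost model. Only the 66-cycle and 4-to-5x numbers come from the abstract; the per-word data cost and the remote-access latency are illustrative assumptions, not figures from the thesis.

# Back-of-the-envelope cost model for the migration figures quoted in the abstract.
# The null-thread cost and typical-thread factor come from the abstract; the
# per-word cost and remote-access latency are assumptions for illustration only.

NULL_THREAD_MIGRATION_CYCLES = 66   # from the abstract
TYPICAL_THREAD_FACTOR = 4.5         # abstract: "4 to 5 times" a null thread

# Assumed parameters (hypothetical, not from the thesis):
CYCLES_PER_DATA_WORD = 2            # slope of the linear data-migration cost
REMOTE_ACCESS_LATENCY = 80          # cycles per remote access without migration

def thread_migration_cost(factor: float = TYPICAL_THREAD_FACTOR) -> float:
    """Estimated cycles to migrate a typical thread."""
    return NULL_THREAD_MIGRATION_CYCLES * factor

def data_migration_cost(words: int) -> float:
    """Estimated cycles to migrate a data block of `words` machine words
    (linear scaling per the abstract; the slope is an assumption)."""
    return NULL_THREAD_MIGRATION_CYCLES + CYCLES_PER_DATA_WORD * words

def migration_pays_off(remote_accesses: int, words: int) -> bool:
    """Does migrating the block once beat paying remote latency on every access?"""
    return data_migration_cost(words) < remote_accesses * REMOTE_ACCESS_LATENCY

if __name__ == "__main__":
    print(f"Typical thread migration: ~{thread_migration_cost():.0f} cycles")
    print(f"Migrate a 64-word block:  ~{data_migration_cost(64):.0f} cycles")
    print("Worth migrating after 5 remote accesses to a 64-word block?",
          migration_pays_off(remote_accesses=5, words=64))

Under these assumed numbers, migration amortizes after only a handful of remote accesses, which is consistent with the abstract's claim that migration can stand in for a data cache when combined with multithreading.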

