Nan Kai University of Technology Library Catalog
Chip Multiprocessor Architecture : Techniques to Improve Throughput and Latency / by Kunle Olukotun, Lance Hammond, James Laudon.
Record Type: Bibliographic - electronic resource : monograph
Title/Author: Chip Multiprocessor Architecture / by Kunle Olukotun, Lance Hammond, James Laudon.
Other Title: Techniques to Improve Throughput and Latency
Author: Olukotun, Kunle.
Other Authors: Hammond, Lance.
Physical Description: VIII, 145 p. online resource.
Contained By: Springer Nature eBook
Subject: Electronic circuits.
Electronic Resource: Fulltext (access the full e-book text)
ISBN: 9783031017209
Chip Multiprocessor Architecture : Techniques to Improve Throughput and Latency / [electronic resource] by Kunle Olukotun, Lance Hammond, James Laudon. - 1st ed. 2007. - VIII, 145 p. online resource. - (Synthesis Lectures on Computer Architecture, 1935-3243).
Contents: The Case for CMPs -- Improving Throughput -- Improving Latency Automatically -- Improving Latency using Manual Parallel Programming -- A Multicore World: The Future of CMPs.
Chip multiprocessors - also called multi-core microprocessors or CMPs for short - are now the only way to build high-performance microprocessors, for a variety of reasons. Large uniprocessors are no longer scaling in performance, because it is only possible to extract a limited amount of parallelism from a typical instruction stream using conventional superscalar instruction issue techniques. In addition, one cannot simply ratchet up the clock speed on today's processors, or the power dissipation will become prohibitive in all but water-cooled systems. Compounding these problems is the simple fact that with the immense numbers of transistors available on today's microprocessor chips, it is too costly to design and debug ever-larger processors every year or two. CMPs avoid these problems by filling up a processor die with multiple, relatively simpler processor cores instead of just one huge core. The exact size of a CMP's cores can vary from very simple pipelines to moderately complex superscalar processors, but once a core has been selected the CMP's performance can easily scale across silicon process generations simply by stamping down more copies of the hard-to-design, high-speed processor core in each successive chip generation. In addition, parallel code execution, obtained by spreading multiple threads of execution across the various cores, can achieve significantly higher performance than would be possible using only a single core. While parallel threads are already common in many useful workloads, there are still important workloads that are hard to divide into parallel threads. The low inter-processor communication latency between the cores in a CMP helps make a much wider range of applications viable candidates for parallel execution than was possible with conventional, multi-chip multiprocessors; nevertheless, limited parallelism in key applications is the main factor limiting acceptance of CMPs in some types of systems. 
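The paragraph above describes achieving higher performance by spreading independent threads of execution across a CMP's cores. As an illustrative sketch (not taken from the book), the same pattern can be shown in software with Python's `multiprocessing.Pool`, which fans independent work items out to one worker process per core; the `simulate` function and the core count are hypothetical stand-ins.

```python
# Hedged sketch: spreading independent units of work across multiple cores,
# the throughput-oriented usage pattern the abstract describes for CMPs.
from multiprocessing import Pool

def simulate(x):
    # Hypothetical stand-in for one independent unit of work
    # (e.g. one transaction in a server workload).
    return x * x

if __name__ == "__main__":
    # 4 worker processes assumed for illustration; each can run on its own core.
    with Pool(processes=4) as pool:
        results = pool.map(simulate, range(8))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because the work items are independent, throughput scales with the number of cores; workloads that cannot be divided this way are the hard case the abstract mentions.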
After a discussion of the basic pros and cons of CMPs when they are compared with conventional uniprocessors, this book examines how CMPs can best be designed to handle two radically different kinds of workloads that are likely to be used with a CMP: highly parallel, throughput-sensitive applications at one end of the spectrum, and less parallel, latency-sensitive applications at the other. Throughput-sensitive applications, such as server workloads that handle many independent transactions at once, require careful balancing of all parts of a CMP that can limit throughput, such as the individual cores, on-chip cache memory, and off-chip memory interfaces. Several studies and example systems, such as the Sun Niagara, that examine the necessary tradeoffs are presented here. In contrast, latency-sensitive applications - many desktop applications fall into this category - require a focus on reducing inter-core communication latency and applying techniques to help programmers divide their programs into multiple threads as easily as possible. This book discusses many techniques that can be used in CMPs to simplify parallel programming, with an emphasis on research directions proposed at Stanford University. To illustrate the advantages possible with a CMP using a couple of solid examples, extra focus is given to thread-level speculation (TLS), a way to automatically break up nominally sequential applications into parallel threads on a CMP, and transactional memory. This model can greatly simplify manual parallel programming by using hardware - instead of conventional software locks - to enforce atomic code execution of blocks of instructions, a technique that makes parallel coding much less error-prone. Contents: The Case for CMPs / Improving Throughput / Improving Latency Automatically / Improving Latency using Manual Parallel Programming / A Multicore World: The Future of CMPs.
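To make the locks-versus-transactions contrast concrete, here is a minimal, hypothetical Python sketch (not from the book) of the conventional lock-based style that transactional memory is meant to simplify: the programmer must manually wrap every shared update in a lock to get the atomicity that a hardware transaction would enforce automatically.

```python
# Hedged sketch: conventional software locks marking an atomic region by hand.
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    """Each thread performs many read-modify-write updates on shared state."""
    global counter
    for _ in range(iterations):
        with lock:        # the lock delimits the atomic block manually;
            counter += 1  # transactional memory would enforce this in hardware

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: no increments are lost, because each update is atomic
```

Without the lock, the four threads' increments could interleave and lose updates; misplacing such locks is exactly the error-prone step that transactional memory's hardware-enforced atomic blocks are argued to remove.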
ISBN: 9783031017209
Standard No.: 10.1007/978-3-031-01720-9 (doi)
Subjects--Topical Terms: Electronic circuits.
LC Class. No.: TK7867-7867.5
Dewey Class. No.: 621.3815
LDR  05424nmm a22003735i 4500
001  1000127461
003  DE-He213
005  20220601134825.0
007  cr nn 008mamaa
008  220601s2007 sz | s |||| 0|eng d
020    $a 9783031017209 $9 978-3-031-01720-9
024 7  $a 10.1007/978-3-031-01720-9 $2 doi
035    $a 978-3-031-01720-9
050  4 $a TK7867-7867.5
072  7 $a TJFC $2 bicssc
072  7 $a TEC008010 $2 bisacsh
072  7 $a TJFC $2 thema
082 04 $a 621.3815 $2 23
100 1  $a Olukotun, Kunle. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1000149554
245 10 $a Chip Multiprocessor Architecture $h [electronic resource] : $b Techniques to Improve Throughput and Latency / $c by Kunle Olukotun, Lance Hammond, James Laudon.
250    $a 1st ed. 2007.
264  1 $a Cham : $b Springer International Publishing : $b Imprint: Springer, $c 2007.
300    $a VIII, 145 p. $b online resource.
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
347    $a text file $b PDF $2 rda
490 1  $a Synthesis Lectures on Computer Architecture, $x 1935-3243
505 0  $a Contents: The Case for CMPs -- Improving Throughput -- Improving Latency Automatically -- Improving Latency using Manual Parallel Programming -- A Multicore World: The Future of CMPs.
520    $a Chip multiprocessors - also called multi-core microprocessors or CMPs for short - are now the only way to build high-performance microprocessors, for a variety of reasons. Large uniprocessors are no longer scaling in performance, because it is only possible to extract a limited amount of parallelism from a typical instruction stream using conventional superscalar instruction issue techniques. In addition, one cannot simply ratchet up the clock speed on today's processors, or the power dissipation will become prohibitive in all but water-cooled systems. Compounding these problems is the simple fact that with the immense numbers of transistors available on today's microprocessor chips, it is too costly to design and debug ever-larger processors every year or two. CMPs avoid these problems by filling up a processor die with multiple, relatively simpler processor cores instead of just one huge core. The exact size of a CMP's cores can vary from very simple pipelines to moderately complex superscalar processors, but once a core has been selected the CMP's performance can easily scale across silicon process generations simply by stamping down more copies of the hard-to-design, high-speed processor core in each successive chip generation. In addition, parallel code execution, obtained by spreading multiple threads of execution across the various cores, can achieve significantly higher performance than would be possible using only a single core. While parallel threads are already common in many useful workloads, there are still important workloads that are hard to divide into parallel threads. The low inter-processor communication latency between the cores in a CMP helps make a much wider range of applications viable candidates for parallel execution than was possible with conventional, multi-chip multiprocessors; nevertheless, limited parallelism in key applications is the main factor limiting acceptance of CMPs in some types of systems. After a discussion of the basic pros and cons of CMPs when they are compared with conventional uniprocessors, this book examines how CMPs can best be designed to handle two radically different kinds of workloads that are likely to be used with a CMP: highly parallel, throughput-sensitive applications at one end of the spectrum, and less parallel, latency-sensitive applications at the other. Throughput-sensitive applications, such as server workloads that handle many independent transactions at once, require careful balancing of all parts of a CMP that can limit throughput, such as the individual cores, on-chip cache memory, and off-chip memory interfaces. Several studies and example systems, such as the Sun Niagara, that examine the necessary tradeoffs are presented here. In contrast, latency-sensitive applications - many desktop applications fall into this category - require a focus on reducing inter-core communication latency and applying techniques to help programmers divide their programs into multiple threads as easily as possible. This book discusses many techniques that can be used in CMPs to simplify parallel programming, with an emphasis on research directions proposed at Stanford University. To illustrate the advantages possible with a CMP using a couple of solid examples, extra focus is given to thread-level speculation (TLS), a way to automatically break up nominally sequential applications into parallel threads on a CMP, and transactional memory. This model can greatly simplify manual parallel programming by using hardware - instead of conventional software locks - to enforce atomic code execution of blocks of instructions, a technique that makes parallel coding much less error-prone. Contents: The Case for CMPs / Improving Throughput / Improving Latency Automatically / Improving Latency using Manual Parallel Programming / A Multicore World: The Future of CMPs.
650  0 $a Electronic circuits. $3 152863
650  0 $a Microprocessors. $3 147303
650  0 $a Computer architecture. $3 147304
650 14 $a Electronic Circuits and Systems. $3 1000149484
650 24 $a Processor Architectures. $3 1000063915
700 1  $a Hammond, Lance. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1000149555
700 1  $a Laudon, James. $e author. $4 aut $4 http://id.loc.gov/vocabulary/relators/aut $3 1000149556
710 2  $a SpringerLink (Online service) $3 1000143549
773 0  $t Springer Nature eBook
776 08 $i Printed edition: $z 9783031005923
776 08 $i Printed edition: $z 9783031028489
830  0 $a Synthesis Lectures on Computer Architecture, $x 1935-3243 $3 1000149526
856 40 $u https://doi.org/10.1007/978-3-031-01720-9 $z Fulltext (access the full e-book text)
912    $a ZDB-2-SXSC
950    $a Synthesis Collection of Technology (R0) (SpringerNature-85007)
Holdings (1 item):
Barcode: OE0074507
Location: Online Database
Circulation Category: Online Resource
Material Type: Online E-book
Call Number: OE
Use Type: Normal
Loan Status: On shelf
Holds: 0