Nan Kai University of Technology Library
Techniques for vision-based human-computer interaction.
Record type: Bibliographic - electronic resource : monograph
Title: Techniques for vision-based human-computer interaction.
Author: Corso, Jason J.
Extent: 151 p.
Note: Source: Dissertation Abstracts International, Volume: 66-12, Section: B, page: 6719.
Contained by: Dissertation Abstracts International, 66-12B.
Subject: Computer Science.
Electronic resource: http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3197132
ISBN: 9780542429422
Thesis (Ph.D.)--The Johns Hopkins University, 2006.
With the ubiquity of powerful, mobile computers and rapid advances in sensing and robot technologies, there exists a great potential for creating advanced, intelligent computing environments. We investigate techniques for integrating passive, vision-based sensing into such environments, which include both conventional interfaces and large-scale environments. We propose a new methodology for vision-based human-computer interaction called the Visual Interaction Cues (VICs) paradigm. VICs fundamentally relies on a shared perceptual space between the user and computer using monocular and stereoscopic video. In this space, we represent each interface component as a localized region in the image(s). By providing a clearly defined interaction locale, it is not necessary to visually track the user. Rather we model interaction as an expected stream of visual cues corresponding to a gesture. Example interaction cues are motion as when the finger moves to press a push-button, and 3D hand posture for a communicative gesture like a letter in sign language. We explore both procedurally defined parsers of the low-level visual cues and learning-based techniques from machine learning (e.g. neural networks) for the cue parsing.
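The abstract describes interface components as localized image regions that watch for expected visual cues, such as motion when a finger presses a virtual push-button. A minimal sketch of that idea, assuming grayscale frames and a hand-set difference threshold (the region coordinates, threshold value, and function name here are illustrative, not from the thesis):

```python
import numpy as np

def motion_cue(prev_frame, frame, region, threshold=25.0):
    """Detect a VICs-style motion cue inside one interface region.

    region is (x, y, w, h) in image coordinates; frames are grayscale
    uint8 arrays. Returns True when the mean absolute frame difference
    inside the region exceeds the threshold.
    """
    x, y, w, h = region
    prev_patch = prev_frame[y:y+h, x:x+w].astype(np.float32)
    patch = frame[y:y+h, x:x+w].astype(np.float32)
    return float(np.abs(patch - prev_patch).mean()) > threshold

# A static scene produces no cue; a bright blob entering the
# button's region triggers one.
prev = np.zeros((120, 160), dtype=np.uint8)
cur = prev.copy()
cur[40:60, 50:70] = 255          # "finger" enters the region
button = (45, 35, 40, 40)        # hypothetical push-button locale
print(motion_cue(prev, prev, button))  # False: nothing moved
print(motion_cue(prev, cur, button))   # True: motion cue fired
```

Because the cue is evaluated only inside the button's own locale, no user tracking is needed, which is the point the abstract makes about a "clearly defined interaction locale."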
LDR    03150nmm 2200301 4500
001    1000004763
005    20061114130257.5
008    061114s2006 eng d
020    $a 9780542429422
035    $a (UnM)AAI3197132
035    $a AAI3197132
040    $a UnM $c UnM
100 1  $a Corso, Jason J. $3 1000005848
245 10 $a Techniques for vision-based human-computer interaction.
300    $a 151 p.
500    $a Source: Dissertation Abstracts International, Volume: 66-12, Section: B, page: 6719.
500    $a Adviser: Gregory D. Hager.
502    $a Thesis (Ph.D.)--The Johns Hopkins University, 2006.
520    $a With the ubiquity of powerful, mobile computers and rapid advances in sensing and robot technologies, there exists a great potential for creating advanced, intelligent computing environments. We investigate techniques for integrating passive, vision-based sensing into such environments, which include both conventional interfaces and large-scale environments. We propose a new methodology for vision-based human-computer interaction called the Visual Interaction Cues (VICs) paradigm. VICs fundamentally relies on a shared perceptual space between the user and computer using monocular and stereoscopic video. In this space, we represent each interface component as a localized region in the image(s). By providing a clearly defined interaction locale, it is not necessary to visually track the user. Rather we model interaction as an expected stream of visual cues corresponding to a gesture. Example interaction cues are motion as when the finger moves to press a push-button, and 3D hand posture for a communicative gesture like a letter in sign language. We explore both procedurally defined parsers of the low-level visual cues and learning-based techniques from machine learning (e.g. neural networks) for the cue parsing.
520    $a Individual gestures are analogous to a language with only words and no grammar. We have constructed a high-level language model that integrates a set of low-level gestures into a single, coherent probabilistic framework. In the language model, every low-level gesture is called a gesture word. We build a probabilistic graphical model with each node being a gesture word, and use an unsupervised learning technique to train the gesture-language model. Then, a complete action is a sequence of these words through the graph and is called a gesture sentence.
520    $a We are especially interested in building mobile interactive systems in large-scale, unknown environments. We study the associated where am I problem: the mobile system must be able to map the environment and localize itself in the environment using the video imagery. Under the VICs paradigm, we can solve the interaction problem using local geometry without requiring a complete metric map of the environment. (Abstract shortened by UMI.)
590    $a School code: 0098.
650  4 $a Computer Science. $3 1000005419
650  4 $a Artificial Intelligence. $3 165300
690    $a 0984
690    $a 0800
710 20 $a The Johns Hopkins University. $3 1000005651
773 0  $t Dissertation Abstracts International $g 66-12B.
790 10 $a Hager, Gregory D., $e advisor
790    $a 0098
791    $a Ph.D.
792    $a 2006
856 40 $u http://pqdd.sinica.edu.tw/twdaoapp/servlet/advanced?query=3197132
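The record's second abstract paragraph describes gesture words composed into gesture sentences via a probabilistic graphical model trained without supervision. A toy sketch of that idea, scoring a sentence as a path through a word-transition graph (the vocabulary, the hand-set transition probabilities, and the function name are invented for illustration; the thesis learns the model unsupervised):

```python
import math

# Hypothetical gesture vocabulary and transition probabilities,
# hand-set here purely for illustration.
trans = {
    ("approach", "press"):    0.8,
    ("approach", "withdraw"): 0.2,
    ("press", "release"):     0.9,
    ("press", "press"):       0.1,
    ("release", "withdraw"):  1.0,
}

def sentence_log_prob(sentence):
    """Score a gesture sentence as a path through the word graph."""
    logp = 0.0
    for a, b in zip(sentence, sentence[1:]):
        p = trans.get((a, b), 0.0)
        if p == 0.0:
            return float("-inf")   # no such edge in the graph
        logp += math.log(p)
    return logp

push = ["approach", "press", "release", "withdraw"]
print(sentence_log_prob(push))                   # finite: a valid sentence
print(sentence_log_prob(["press", "approach"]))  # -inf: no such transition
```

Ill-formed word sequences score zero probability, which is how a language model with a grammar rejects streams of cues that no single coherent action would produce.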
Holdings: 1 record
Barcode: OE0000738
Location: Online database
Circulation category: Online resource
Material type: Online e-book
Call number: OE
Use type: Normal
Loan status: On shelf
Holds: 0