Bilinear spatiotemporal basis models

Authors:

Ijaz Akhter (1), Tomas Simon (2), Sohaib Khan (3), Iain Matthews (4), Yaser Sheikh (5)

Affiliation:

1. LUMS School of Science and Engineering and Disney Research Pittsburgh, PA

2. Carnegie Mellon University and Disney Research Pittsburgh, PA

3. LUMS School of Science and Engineering, Lahore, Pakistan

4. Disney Research Pittsburgh, PA, and Carnegie Mellon University

5. Carnegie Mellon University

Abstract

A variety of dynamic objects, such as faces, bodies, and cloth, are represented in computer graphics as a collection of moving spatial landmarks. Spatiotemporal data is inherent in a number of graphics applications including animation, simulation, and object and camera tracking. The principal modes of variation in the spatial geometry of objects are typically modeled using dimensionality reduction techniques, while concurrently, trajectory representations like splines and autoregressive models are widely used to exploit the temporal regularity of deformation. In this article, we present the bilinear spatiotemporal basis as a model that simultaneously exploits spatial and temporal regularity while maintaining the ability to generalize well to new sequences. This factorization allows the use of analytical, predefined functions to represent temporal variation (e.g., B-Splines or the Discrete Cosine Transform) resulting in efficient model representation and estimation. The model can be interpreted as representing the data as a linear combination of spatiotemporal sequences consisting of shape modes oscillating over time at key frequencies. We apply the bilinear model to natural spatiotemporal phenomena, including face, body, and cloth motion data, and compare it in terms of compaction, generalization ability, predictive precision, and efficiency to existing models. We demonstrate the application of the model to a number of graphics tasks including labeling, gap-filling, denoising, and motion touch-up.
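To make the factorization described above concrete, the following is a minimal NumPy sketch, not the authors' implementation: it assumes the motion data is arranged as a matrix S of F frames by 3P stacked point coordinates, uses a truncated Discrete Cosine Transform as the predefined temporal basis and PCA shape modes as the spatial basis, and estimates the coefficient matrix by projection onto the two orthonormal bases. All function and variable names are illustrative rather than the paper's notation.

```python
import numpy as np

def dct_basis(F, k_t):
    """Orthonormal DCT-II basis: columns are the k_t lowest-frequency
    cosine trajectories over F frames (a predefined temporal basis)."""
    n = np.arange(F)
    basis = np.cos(np.pi * (n[:, None] + 0.5) * np.arange(k_t)[None, :] / F)
    basis *= np.sqrt(2.0 / F)
    basis[:, 0] /= np.sqrt(2.0)
    return basis  # shape (F, k_t), orthonormal columns

def fit_bilinear_basis(S, k_t, k_s):
    """Fit S (F x 3P) with a bilinear model  S ~ mean + Theta @ C @ Omega.T.

    Theta : predefined DCT temporal basis, shape (F, k_t)
    Omega : PCA shape basis learned from the frames, shape (3P, k_s)
    C     : coefficient matrix, shape (k_t, k_s)
    """
    mean_shape = S.mean(axis=0, keepdims=True)
    S0 = S - mean_shape

    Theta = dct_basis(S.shape[0], k_t)

    # Shape basis from the principal modes of per-frame shape variation.
    _, _, Vt = np.linalg.svd(S0, full_matrices=False)
    Omega = Vt[:k_s].T                      # (3P, k_s), orthonormal columns

    # With orthonormal Theta and Omega, the least-squares coefficients
    # reduce to a projection of the centered data onto both bases.
    C = Theta.T @ S0 @ Omega                # (k_t, k_s)
    return mean_shape, Theta, C, Omega

def reconstruct(mean_shape, Theta, C, Omega):
    """Re-synthesize the sequence from the bilinear coefficients."""
    return mean_shape + Theta @ C @ Omega.T

# Toy usage: 200 frames of 40 3D points (random stand-in data).
rng = np.random.default_rng(0)
S = rng.standard_normal((200, 120))
mean_shape, Theta, C, Omega = fit_bilinear_basis(S, k_t=10, k_s=5)
S_hat = reconstruct(mean_shape, Theta, C, Omega)
print(S_hat.shape)  # (200, 120)
```

Each column of Theta @ C @ Omega.T pairs a shape mode with a low-frequency cosine trajectory, which is the "shape modes oscillating over time at key frequencies" interpretation given in the abstract; swapping the DCT for a B-spline basis only changes how Theta is constructed.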

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Graphics and Computer-Aided Design


Cited by 84 articles.

1. A spatiotemporal motion prediction network based on multi-level feature disentanglement. Image and Vision Computing, 2024-06.

2. Directed Graph-based Refinement of Three-dimensional Human Motion Data Using Spatial-temporal Information. International Journal of Precision Engineering and Manufacturing-Smart Technology, 2024-01-01.

3. Spatiotemporal Data Modeling Based on XML. Advances in Systems Analysis, Software Engineering, and High Performance Computing, 2023-12-15.

4. Analysis of linear and bilinear spatial temporal models in the case of missing observations. Communications in Statistics - Simulation and Computation, 2023-12-13.

5. Spatiotemporal Consistency Learning From Momentum Cues for Human Motion Prediction. IEEE Transactions on Circuits and Systems for Video Technology, 2023-09.
