Project Proposal



King Saud University

College of Computer and Information Sciences

Computer Application Department
Project I (Research Seminar)

(CSC596)

3-D Human Modeling and Animation


Project Proposal

Submitted by:

Hanouf Al-Maziad
Submitted to:

Dr. Hassan Mathkour



May 15, 2004

Table of contents



1. Introduction

2. Related works

3. Problem definition

References

1. Introduction

For many years, the modeling and animation of human bodies has been an important research goal in computer graphics. The human face presents the greatest challenge in modeling and animation due to its complex articulation and its communicative importance in expressing language and emotion. We can say that the face is the primary source of human emotions and expressions.

The modeling and animation of human faces has been one of the major research fields in human modeling and animation, and a large number of studies have addressed it. It has been an active topic of research in computer graphics since the 1970s, and as the field has evolved it has become more and more feasible to use complex simulation techniques.

In the past, most studies focused on achieving approximate facial emotions and expressions by deforming the skin while ignoring the underlying layers, which led to non-realistic facial modeling and animation. This was considered one of the main challenges, if not the most challenging one, facing researchers in this topic. The main objective of facial modeling and animation research was, and still is, to propose a model that is highly realistic and captures most of the different natural facial expressions. In order to achieve such a model, interior facial details such as muscles, bones, and tissues must be taken into account.

2. Related works

A facial human model is created by specifying an appropriate set of parameter values [16], [17]. In [18], a model was designed around the underlying facial structure, taking into account the skeleton, the muscles, and the skin. Most subsequent research on facial modeling and animation has followed this multi-layered approach; the differences and evolution lie in how the interior layers are represented and modeled. For example, [22] represented the action of muscles by using primary motivators on a non-specific deformable topology of the face. Another interesting way of representing muscle actions was devised in [12], which proposed a model that simulates muscle actions through a procedure called an Abstract Muscle Action (AMA) procedure. In [15], the researchers proposed a method based on B-splines.
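The parameter-driven idea above can be sketched in a few lines: a small set of named parameters deforms the vertices of a base mesh. This is a minimal illustration of the concept, not the model from any cited paper; the "smile" parameter, the per-vertex weights, and all coordinates are invented for the example.

```python
def apply_parameters(base_vertices, params):
    """Offset each vertex along y by a 'smile' parameter in [0, 1],
    scaled by the vertex's own influence weight."""
    smile = params.get("smile", 0.0)
    deformed = []
    for (x, y, z), weight in base_vertices:
        # 'weight' marks how strongly this vertex follows the parameter.
        deformed.append((x, y + smile * weight, z))
    return deformed

# Toy base mesh: each vertex carries an influence weight for the parameter.
base = [((0.0, 0.0, 0.0), 0.0),   # nose tip: unaffected
        ((1.0, -0.5, 0.0), 1.0)]  # mouth corner: fully driven

neutral = apply_parameters(base, {"smile": 0.0})
happy = apply_parameters(base, {"smile": 1.0})
```

A full parameterized model would drive many such weighted regions (jaw rotation, eyelid opening, brow raise) from one parameter table in the same way.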

Previous work on facial animation [18] did not propose methods that can be considered powerful and robust; instead, synchronization in these systems was manual, and the emotion parameterization was fixed and could not be changed [9].

Several papers have reported studies of problems in computer-animated speech. One automatic approach [3] animates speech synthesized by rules; the extra parameters needed to control the lips, jaw, and facial expression are simply added to the table of parameters that controls the speech itself [5]. A well-known approach to animation is traditional keyframing; [12] used a collection of multiple tracks, yet the synchronization is still manual, performed by the animator.
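The keyframing idea can be sketched as follows: each track stores (frame, value) keys set by the animator, and the system fills in intermediate frames by interpolation. The linear interpolation, the track name, and the key values below are illustrative assumptions, not taken from the cited systems.

```python
def sample_track(keys, frame):
    """Linearly interpolate a sorted list of (frame, value) keyframes."""
    if frame <= keys[0][0]:
        return keys[0][1]
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)
    # Past the last key: hold the final value.
    return keys[-1][1]

# One track of a multi-track animation: jaw opens, then closes.
jaw_track = [(0, 0.0), (10, 1.0), (20, 0.0)]
```

In a multi-track system, lips, jaw, and brows would each be a separate key list sampled at the same frame; the manual part criticized above is choosing the key frames so that they line up with the recorded speech.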

There are basically two different approaches to acquiring phonemes and their timings for lip sync. In the speech-driven method, they are obtained from a recorded speech signal; a method proposed in [2] generates such additional expressions fully automatically from the input speech signal. In the text-driven approach described in [1], the TTS component provides high-level linguistic information, such as different types of accents and pauses as well as the type of sentence, in addition to the phonetic transcription. This information is used to generate non-verbal, speech-related facial expressions.
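Once phonemes and timings are available from either source, the common next step is mapping each phoneme to a viseme (a mouth shape) with the same start time. The sketch below assumes a tiny made-up phoneme-to-viseme table; real systems use much larger mappings than these three entries.

```python
# Toy phoneme-to-viseme table; entries are illustrative only.
VISEMES = {"m": "closed", "a": "open", "o": "round"}

def phonemes_to_visemes(phonemes):
    """phonemes: list of (phoneme, start_sec) pairs, e.g. from a TTS
    component or a speech recognizer. Returns (viseme, start_sec) pairs,
    falling back to a neutral 'rest' shape for unknown phonemes."""
    return [(VISEMES.get(p, "rest"), t) for p, t in phonemes]

timeline = phonemes_to_visemes([("m", 0.0), ("a", 0.12), ("o", 0.31)])
```

The resulting viseme timeline can then drive the mouth parameters of the face model, with the keyframes between visemes interpolated automatically rather than placed by hand.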

Deformation of head models according to anthropometric data enables us to simulate human head growth based on statistical measurements that have been collected over decades for populations varying in age, sex, and ethnicity [20], [21]. In comparison to standard geometric operators, modeling by modification of proportions and facial attributes promises greatly eased creation of realistic virtual heads.
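A minimal sketch of proportion-based modification: rescale a head mesh so that one measured attribute matches a target value taken from population statistics. The measurement, the uniform scaling, and all numbers are assumptions for illustration; the cited work modifies individual proportions far more selectively than a global scale.

```python
def scale_to_measurement(vertices, current, target):
    """Uniformly scale a mesh so a measured length 'current'
    (e.g. face height) becomes the statistically derived 'target'."""
    s = target / current
    return [(x * s, y * s, z * s) for (x, y, z) in vertices]

# Toy head mesh spanning a face height of 2.0 units; grow it to 3.0.
head = [(0.0, 1.0, 0.0), (0.0, -1.0, 0.0)]
grown = scale_to_measurement(head, current=2.0, target=3.0)
```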

In [8], researchers introduced a versatile construction and deformation method for head models with anatomical structure. Using their anatomical head model and deformation method, they were able to reconstruct human faces from skull data [7]. The look of a face can be approximated by predicting and modeling the layers of tissue on the skull [10]. The researchers in [7] developed a facial reconstruction approach that fits their anatomy-based virtual head model to a scanned skull using statistical data on skull/tissue relationships. [11] sheds light on another reconstruction approach.

To achieve fast real-time rendering, rather coarse triangle meshes have to be used. Visual appearance can be improved by using textures from photographs [29], [24] and by exploiting the capabilities of current graphics boards to simulate skin structure and reflection properties, which is the focus of [8]. Additional rendering speed-ups can be obtained in different ways [6], [4], [23].

Motions are described as pairs of numeric tuples that identify the initial frame, the final frame, and the interpolation. Pearce et al. [19] introduced a small set of keywords to extend the Parke model.
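The motion description above can be sketched as a tuple of (initial frame, final frame, interpolation function). The function names and the easing curve below are invented for illustration, not drawn from the cited work.

```python
def evaluate_motion(motion, frame, v0, v1):
    """Evaluate a motion tuple at a given frame, moving a value
    from v0 to v1 between the initial and final frames."""
    f0, f1, interp = motion
    # Normalize the frame into [0, 1] and clamp outside the interval.
    t = max(0.0, min(1.0, (frame - f0) / (f1 - f0)))
    return v0 + interp(t) * (v1 - v0)

def linear(t):
    return t

def ease_in(t):
    # Slow start, fast finish: a common alternative interpolation.
    return t * t

motion = (0, 10, linear)
```

Swapping the third element of the tuple changes the interpolation without touching the frame range, which is the point of encoding it in the motion description.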

With regard to anatomically-based models, all aspects are introduced in [26].

A number of facial animation techniques are available [27]. The main resource concerning Maya is [25].

There are also several resources that tackle specific aspects of computer graphics, such as human modeling and animation. [13] is a more specialized one, dealing with modeling and animation in the powerful Maya system. It teaches how to create natural human movement of bones, muscles, and skin; specific modeling and design techniques are presented, as well as animation controls for organic 3D characters.

3. Problem definition

In the field of 3D computer graphics, human modeling has advanced tremendously. The reason behind this is the vast number of applications, such as: 1- movie character generation, 2- advertisement, 3- computer games, 4- engineering design, and many more.

One of the most important applications is movie character generation. In order to make a character realistic, several issues must be addressed, such as movement, graphics quality, and emotion expression.

Emotion expression might be the most important issue of them all. A computer character must be able to display human-like expressions in order to be convincing and acceptable to viewers. Even non-human computer-generated characters must display these human expressions to be acceptable; the animal characters in movies like The Lion King and Finding Nemo are filled with emotion.

From a layered point of view, the anatomy of the human face (skin, muscles, and bone) will be examined and explored. Expressions are produced by the expansion and contraction of different facial muscles. Generally, expressions are modeled through skin-layer points rather than muscle-layer points. In this work, the facial muscle points that control expressions are identified, and combinations of those points make up emotional facial expressions. The final product is an array of points corresponding to different emotions, with a minimum of three levels for each emotion (e.g. Normal Happy, Happy, and Very Happy).
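The planned data structure can be sketched as a table mapping each emotion to its three levels, where each level is an array of muscle-point activations. The muscle names and activation values below are placeholders invented for the sketch; the actual points will come from the anatomical study described above.

```python
# Each emotion maps level -> {muscle point: activation in [0, 1]}.
# Muscle names and values are illustrative placeholders.
EMOTIONS = {
    "happy": {
        1: {"zygomatic_major": 0.3},                             # Normal Happy
        2: {"zygomatic_major": 0.6},                             # Happy
        3: {"zygomatic_major": 1.0, "orbicularis_oculi": 0.4},   # Very Happy
    },
}

def muscle_points(emotion, level):
    """Look up the muscle-point activation array for an emotion level."""
    return EMOTIONS[emotion][level]
```

Driving the face then reduces to selecting an emotion and a level and pushing the resulting activations onto the corresponding muscle points of the model.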

A well-known computer graphics modeling program (Maya) will be used to implement those arrays of muscle points and apply them to a human face model for different emotional expressions. Eventually, we will provide the user with a program that allows him/her to apply different emotions to a specific model in a simple and logical way: by just specifying the desired emotion, the user obtains realistic facial emotion expressions.



References

  1. Albrecht, I., Haber, J., Kähler, K., Schröder, M., Seidel, H. (2002) "May I talk to you? :-)" - Facial Animation from Text. In Sabine Coquillart, Heung-Yeung Shum, and Shi-Min Hu, editors, Proceedings of the Tenth Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2002), IEEE Computer Society. pp. 77-86

  2. Albrecht, I., Haber, J., Seidel, H. (2002) Automatic Generation of Non-Verbal Facial Expressions from Speech. In John Vince and Rae Earnshaw, editors, Advances in Modelling, Animation and Rendering (Proceedings Computer Graphics International 2002). pp. 283-293

  3. Hill, D. R., Pearce, A., Wyvill, B. (1988) Animating Speech: an Automated Approach Using Speech Synthesised by Rules. The Visual Computer. 3(5)

  4. Jeong, W., Kim, C. (2002) Direct Reconstruction of Displaced Subdivision Surface from Unorganized Points. Graphical Models. 64(2): 78-93

  5. Lewis, J. P., Parke, F. I. (1987) Automated Lip-synch and Speech Synthesis for Character Animation. Proc. CHI '87 and Graphics Interface '87. pp. 143-147

  6. Kähler, K., Haber, J., Seidel, H. (2003) Dynamically Refining Animated Triangle Meshes for Rendering. The Visual Computer. (19)

  7. Kähler, K., Haber, J., Seidel, H. (2003) Reanimating the Dead: Reconstruction of Expressive Faces from Skull Data. In Jessica K. Hodgins, editor, ACM Transactions on Graphics (Proceedings of SIGGRAPH 2003). Association for Computing Machinery (ACM). (22)

  8. Kähler, K., Haber, J., Yamauchi, H., Seidel, H. (2002) Head Shop: Generating Animated Head Models with Anatomical Structure. In Stephen N. Spencer, editor, Proceedings of the 2002 ACM SIGGRAPH Symposium on Computer Animation, Association for Computing Machinery (ACM). pp. 55-64

  9. Kalra, P., Mangili, A., Magnenat-Thalmann, N., Thalmann, D. (1991) SMILE: A Multilayered Facial Animation System. Proc. IFIP WG 5.10, Tokyo, Japan (Ed. Kunii, T. L.). pp. 189-198

  10. Taylor, K. T. (2001) Forensic Art and Illustration. CRC Press LLC.

  11. Mahoney, D. P. (1997) Making Faces. Computer Graphics World. November

  12. Magnenat-Thalmann, N., Primeau, E., Thalmann, D. (1988) Abstract Muscle Action Procedures for Human Face Animation. The Visual Computer. 3(5): 32-39

  13. Maraffi, C. (2003) Maya Character Creation: Modeling and Animation Controls. 1st ed. New Riders

  14. Marco, T., Yamauchi, H., Haber, J., Seidel, H. (2002) Texturing Faces. In Michael McCool and Wolfgang Stürzlinger, editors, Proceedings Graphics Interface 2002, Canadian Human-Computer Communications Society, A K Peters. pp. 89-98

  15. Nahas, M., Huitric, H., Saintourens, M. (1988) Animation of a B-spline Figure. The Visual Computer. 3(5): 272-276

  16. Parke, F. I. (1975) A Model for Human Faces that Allows Speech Synchronized Animation. Computers and Graphics, Pergamon Press. 1(1): 1-4

  17. Parke, F. I. (1982) Parameterized Models for Facial Animation. IEEE Computer Graphics and Applications. 2(9): 61-68

  18. Pearce, A., Wyvill, B., Wyvill, G., Hill, D. (1986) Speech and Expression: a Computer Solution to Face Animation. Proceedings of Graphics Interface '86. pp. 136-140

  19. Platt, S., Badler, N. (1981) Animating Facial Expressions. Proc. SIGGRAPH '81. 15(3): 245-252

  20. Rhine, J. S., Campbell, H. R. (1980) Thickness of Facial Tissues in American Blacks. Journal of Forensic Sciences. 25(4): 847-858

  21. Rhine, J. S., Moore, C. E. (1984) Tables of Facial Tissue Thickness of American Caucasoids in Forensic Anthropology. Maxwell Museum Technical Series. (1)

  22. Waters, K. (1987) A Muscle Model for Animating Three-Dimensional Facial Expression. Computer Graphics, Proceedings SIGGRAPH '87. 21(4): 17-24

  23. Jeong, W.-K., Kähler, K., Haber, J., Seidel, H. (2002) Automatic Generation of Subdivision Surface Head Models from Point Cloud Data. In Michael McCool and Wolfgang Stürzlinger, editors, Proceedings Graphics Interface 2002, Canadian Human-Computer Communications Society. pp. 181-188

  24. Yamauchi, H., Haber, J., Seidel, H. P. (2003) Image Restoration using Multi-resolution Texture Synthesis and Image In-painting. Proceedings of Computer Graphics International (CGI 2003), IEEE Computer Society.

  25. http://www.alias.com/eng/index_flash.shtml (Alias - Maya)

  26. http://www.nlm.nih.gov/research/visible/visible_human.html (medicine - Visible Human Project)

  27. http://www.dipaola.org/stanford/facial/class8notes.html (facial modeling techniques)

