Saturday, October 15, 2011

First beta of Autodesk Project Geppetto, a crowd simulation tool for 3ds Max, released

The first beta of Project Geppetto (People Power), a crowd simulation program for 3ds Max, has been released.

The current name "Geppetto" covers one part of a technology effort called "People Power", and the stated longer-term goal is to solve three big problems:

1. Easily generating natural-looking motion (Geppetto)
2. Creating diverse human appearances and enabling cultural variation (Evolver)
3. Building an efficient framework that lets tens of thousands of characters interact

http://labs.autodesk.com/utilities/geppetto/?popupDownload=1
An Autodesk account is required to download.

EVOLVER
Recently Autodesk acquired Evolver.com, and we’ve chosen to make the existing web site available to those of you interested in Project Geppetto. Our long term interest is to pair the Evolver technology with Project Geppetto to efficiently create large, randomly varied visual styles of Geppetto actors. Right now, the Evolver actors will not directly connect to Geppetto because we haven’t had the time to wire it all up. We’re providing you with free access to the Evolver site so that you can begin to give us feedback on how Evolver should interface with Geppetto.

OCEAN OF MOTION

Autodesk has been researching the underlying technology behind Geppetto for over five years. The technique is more sophisticated than simple blending techniques that result in the awkward and implausible motions used in video games. Geppetto is based on a fundamentally new approach to how motion data is processed and applied to characters. Motion data from key frames or motion capture clips are synthesized in such a way that variations of the original performances can be interactively applied with a high degree of quality. The process of working with the data is akin to training your characters with performance repertoires. We’re calling this collection of motion that gets processed an "Ocean of Motion" to represent how different our approach is.
Importantly, the approach we’ve taken is not specific to human motions or to crowds. Given the right data, Geppetto could control dogs, snakes, dragons, cars, and so on, and with a more directorial, itinerary-based interface, Geppetto could be used to block in individual "hero" animation. Because the technology is data-driven, its capabilities are limited only by the amount and kind of motion data it has access to.
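To make the "Ocean of Motion" idea a little more concrete, here is a minimal, purely illustrative sketch (in Python) of one generic data-driven pattern: treat the processed clips as a searchable repertoire and blend whichever clips best match the locomotion being requested. This is not Geppetto's actual algorithm or data model; MotionClip, REPERTOIRE, and pick_clips are hypothetical names, and the real synthesis described above is far more sophisticated than this nearest-clip weighting.

    import math
    from dataclasses import dataclass

    # Hypothetical clip record; the fields are illustrative only.
    @dataclass
    class MotionClip:
        name: str
        speed: float      # average forward speed of the clip (m/s)
        turn_rate: float  # average turning rate of the clip (deg/s)

    # A tiny "repertoire" of processed motion data.
    REPERTOIRE = [
        MotionClip("walk_straight", 1.4, 0.0),
        MotionClip("walk_turn_left", 1.2, 45.0),
        MotionClip("walk_turn_right", 1.2, -45.0),
        MotionClip("run_straight", 3.5, 0.0),
    ]

    def pick_clips(target_speed, target_turn, k=2):
        """Return the k clips closest to the requested locomotion,
        with normalized inverse-distance blend weights."""
        def dist(c):
            return math.hypot(c.speed - target_speed,
                              (c.turn_rate - target_turn) / 30.0)
        ranked = sorted(REPERTOIRE, key=dist)[:k]
        weights = [1.0 / (dist(c) + 1e-6) for c in ranked]
        total = sum(weights)
        return [(c, w / total) for c, w in zip(ranked, weights)]

    if __name__ == "__main__":
        for clip, w in pick_clips(target_speed=1.3, target_turn=20.0):
            print(f"{clip.name}: weight {w:.2f}")

In a real system the query would cover many more dimensions (pose, phase, contacts, style), and the blending would have to preserve physical plausibility rather than simply averaging clip parameters.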
Geppetto technology can currently solve some of the following problems:
  • Path following: Real-time steering of physically correct walking and running motion that follows a specified path (a simplified steering sketch follows this list).
  • Agile responses: Real-time triggering of physically believable agile motions, such as quick turns. This is required for collision avoidance and navigation.
  • Object interaction: Seamless and natural real-time interaction with objects in the environment, such as sitting in chairs or stepping up to climb stairs.
  • Intelligent "human-like" dynamic obstacle avoidance: The perception of potential collisions and subsequent evasive actions must mimic the response times and behavior of real human beings.
  • Intuitive crowd orchestration: New methods will be introduced that allow artists to directly control the flow and interaction of traffic patterns. Ease of use plays a major role in the design of the intended workflow. Characters are orchestrated in an intuitive high-level fashion through the manipulation of flow patterns, goals, and designated behaviors. The tools are geared toward controls that are fun to use, and accessible to "non-animators", but not at the expense of serious artistic control.
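As a deliberately simplified illustration of the first two capabilities, the Python sketch below steers a point agent along a list of 2D waypoints on a flat ground plane and adds a crude repulsive term for nearby obstacles. It is a generic steering-behavior example; seek, avoid, and follow_path are hypothetical helpers and do not correspond to Geppetto's actual solver, which produces physically correct full-body motion rather than point steering.

    import math

    def seek(pos, target, max_speed):
        """Velocity pointing from pos toward target, capped at max_speed."""
        dx, dy = target[0] - pos[0], target[1] - pos[1]
        d = math.hypot(dx, dy)
        if d < 1e-6:
            return (0.0, 0.0)
        return (dx / d * max_speed, dy / d * max_speed)

    def avoid(pos, vel, obstacles, radius):
        """Add a repulsive component for every obstacle closer than radius.
        The result is not re-capped at max_speed; good enough for a sketch."""
        ax, ay = vel
        for ox, oy in obstacles:
            dx, dy = pos[0] - ox, pos[1] - oy
            d = math.hypot(dx, dy)
            if 1e-6 < d < radius:
                push = (radius - d) / radius      # stronger when closer
                ax += dx / d * push
                ay += dy / d * push
        return (ax, ay)

    def follow_path(path, obstacles, max_speed=1.4, dt=0.1, arrive=0.3):
        """Step a point agent along a list of 2D waypoints on a flat plane."""
        pos, i, steps = list(path[0]), 1, 0
        while i < len(path) and steps < 10000:    # safety cap on iterations
            vel = seek(pos, path[i], max_speed)
            vel = avoid(pos, vel, obstacles, radius=1.0)
            pos[0] += vel[0] * dt
            pos[1] += vel[1] * dt
            steps += 1
            if math.hypot(path[i][0] - pos[0], path[i][1] - pos[1]) < arrive:
                print(f"reached waypoint {i} at ({pos[0]:.2f}, {pos[1]:.2f}) after {steps} steps")
                i += 1
        return pos

    if __name__ == "__main__":
        follow_path(path=[(0, 0), (5, 0), (5, 5)], obstacles=[(2.5, 0.2)])

In a production tool, velocities produced by steering like this would drive the motion-synthesis layer described above rather than translating a point directly.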

LIMITATIONS

Given the enormous range and intelligence of how humans move in crowds, a guiding principle of the Geppetto development goals is to gradually increase the depth of complexity from a framework that begins with a restricted set of behaviors. The Project Geppetto technology preview is intentionally bounded in scope in order to understand, test, and assess the perceptual "plausibility" of results with relatively simple, manageable crowd scenarios before adding in more complicated motions, skills, and interactions. There are some major limitations in this first manifestation of Geppetto; the details can be found in the User Guide. You’re being asked to evaluate a subset of what we know is needed, not so much to tell us that the known limitations are indeed limitations.

AUTODESK LABS GOALS

Our limited goals for this Labs release are to test our early concepts with a broad range of users:
  1. Is the Geppetto motion believable? Did we succeed in creating life-like behavior for a limited range of cases?
  2. Are the methods for setting up and configuring the crowds useful, limiting or too difficult?
  3. Do you think we’re headed in a generally useful direction?
As you can see, things like the representation of the characters are not a goal for this initial release, because what we have is not what we want. It also isn’t a goal to see whether this initial release works with all terrain, buildings, or structures; we know it only works with flat environments.

FUTURE

We can't comment directly on our plans for this research, but we’d like to see this technology become more flexible so that it can be used in more arbitrary ways, whether for analysis or entertainment purposes. Project Geppetto is just a stepping stone to much more interesting applications of this technology. The technology preview executable expires at the end of the preview period; the time-bomb date has been set for July 1, 2012.



Quick Start Tutorial
