Hoshi Postmortem

From ETC Public Wiki

Introduction and Overview

Lyra Studio is the Fall 2017 animation project at Carnegie Mellon University’s Entertainment Technology Center. Following the tradition of the past semesters, the project had two separate deliverables: an animation production package to be completed this semester using projection mapping technology, and a pre-production package that we would hand off to a future semester team. For our production package “Hoshi!”, we utilized the strengths of the projection mapping medium. Our goal was to produce an engaging narrative by blending spatial augmented reality with the use of real objects on a tabletop setting. Alongside our production efforts, we also developed a pre-production package dealing with the Egyptian myth “The Land of the Dead.” As part of this package, we were tasked with developing a storyline, deciding on an artistic direction, and creating preliminary art assets.


Lyra Studio consisted of five team members: Justin Campbell (producer and writer), Prasanth Raviraman (3D/Rigging Artist), Jenny Liu (3D Artist and Animator), Camille Ramseur (Programmer/Technical Artist), and Emre Findik (Sound Designer and Programmer).

As a team we benefited greatly from each other’s skills, balancing one another’s strengths and weaknesses, and as a result we produced good work. This was not without challenges: being the animation project meant there would be a substantial amount of animation to produce, a style of storytelling our team had very little experience with. Still, we learned many lessons, stayed dedicated, and worked within our constraints to produce a successful deliverable.

What Went Well

Scope

The purpose of this project was to create an engaging experience using projection mapping. The art style, the audio, and the technology each played a big part, and each was an opportunity to get lost in development. As a team, however, we stayed within our means, knowing from the start that the experience we delivered would have to be simple.


We did many things to keep the project in scope. One was the use of origami characters for the animation. As discussed in more detail below, animating paper characters rather than more detailed figures such as humans truly helped the project stay in scope. The other was a simple story. To be honest, the story did not start out so simple, but as the semester progressed simplicity was in our favor: animation takes time, and with so few animators on the team, any big, elaborate story would have been nearly impossible to complete by the end of the semester. A simple story of discovery and friendship, with simple interactions, was a huge benefit. As our remaining time shrank, even that story got simpler, but simple origami characters and a simple story are what allowed us to cross the finish line of the semester.

Animation

The major challenge of the animation was the quantity we had to produce in a short time with few animators. The story and theming we chose greatly simplified this challenge: the origami characters had no facial features to animate, and their anatomy was relatively simple. The decision to make the dragon act like a dog also helped, as it gave us real-life references. Since the story continually changed with playtesting, even after soft opening, we weren’t able to polish the animations as much as we wanted. It also meant we had to create movement-cycle animations and then script the character movements in Unity, rather than animating every step of movement in Maya. This made the transitions less smooth, but let us make changes to the story far more easily.
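The scripted-movement approach can be sketched in a language-agnostic way: a looping movement cycle plays while code steps the character toward the next waypoint each frame. This Python sketch is our own illustration, not the project's actual Unity script (all names and numbers are hypothetical); it mirrors the idea behind stepping a transform toward a target, as Unity's `Vector3.MoveTowards` does.

```python
import math

def step_toward(pos, target, speed, dt):
    """Move pos toward target by at most speed*dt; return (new_pos, arrived)."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed * dt:
        return target, True           # close enough: snap to the waypoint
    f = speed * dt / dist
    return (pos[0] + dx * f, pos[1] + dy * f), False

def walk_route(start, waypoints, speed, dt):
    """Advance through waypoints one frame at a time, yielding each frame's position.
    A walk-cycle animation would loop independently while this runs."""
    pos = start
    for wp in waypoints:
        arrived = False
        while not arrived:
            pos, arrived = step_toward(pos, wp, speed, dt)
            yield pos
```

Because the route is data rather than baked keyframes, changing the story only means changing the waypoint list, which is exactly the flexibility we needed.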

Though we made some concessions in regards to animation quality, we still managed to create a final piece with animations that were appealing and conveyed the correct emotions.

Render Farm

We didn’t need it. Our experience rendered in real time, which suited a portable installation: things will move between setups, and real-time rendering let us tweak and make small changes to the environment of the experience on the spot. It also meant we didn’t lose long hours to render time.

Sound

We received a lot of positive feedback on our sound positioning. Unity’s quad sound output, apparently a relatively new addition at the time, didn’t work right out of the box, and we had to write a script to manipulate the sound settings before it would. We also needed to tweak speaker volumes and the sound clips themselves to sell the positioning. Setting up the system took time as well: we had to acquire speakers of the right size, a sound card that could handle surround sound, and the physical equipment to install them.
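The kind of positioning tweak involved can be illustrated with constant-power panning across four speakers. This Python sketch is our own illustration, not the project's actual script (the speaker layout and function names are assumptions): it computes per-speaker gains for a source azimuth so that the total power between the two nearest speakers stays constant.

```python
import math

# Speaker azimuths in degrees, clockwise from "front":
# front-right, rear-right, rear-left, front-left.
SPEAKERS = [45.0, 135.0, 225.0, 315.0]

def quad_gains(azimuth):
    """Constant-power gains for the adjacent speaker pair around `azimuth`.
    Exactly one 90-degree arc between neighbors contains the source."""
    az = azimuth % 360.0
    gains = [0.0, 0.0, 0.0, 0.0]
    for i, lo in enumerate(SPEAKERS):
        delta = (az - lo) % 360.0      # angle past speaker i, wrapping at 360
        if delta < 90.0:               # source sits between speaker i and i+1
            t = delta / 90.0
            gains[i] = math.cos(t * math.pi / 2)
            gains[(i + 1) % 4] = math.sin(t * math.pi / 2)
            break
    return gains
```

A source straight ahead (azimuth 0) splits equally between the two front speakers with total power 1, which is the "tricking the positioning" knob: nudging these gains or the clip levels shifts where a sound seems to come from.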


Paper sounds were tedious to put together, but worked well for our audience; they were crucial in bringing our paper characters to life. The process involved several takes for each of the different sounds, then a long iterative pass in which the balance between them was tweaked to sound coherent and true to the character each was attached to. As mentioned in our final presentation, we paid special attention to making the bunny feel tender and crisp, and the dragon crude and massive.


Our experience included short musical passages, and this design choice worked really well for us. The cues succeeded in conveying emotion throughout the piece. The violin section placed at the peak of the emotional arc perhaps got the best reaction from our audience, as almost everyone sympathized with the dragon’s sorrow at that moment. By design, this section was longer, more expressive, and more melodic than the minimalist cues that preceded it in the 90-second piece, and we see that contrast as a large part of why it worked.


Problems Faced

Having to Complete Two Pre-Production Packages

This semester we had three beasts to handle instead of two: a pre-production package for a future semester, our own pre-production for this semester’s piece, and the production itself.


It is challenging enough to complete a single pre-production package in a semester, but because of the limited animation experience on our team, we were not given the previous semester’s pre-production package to produce. That meant we not only had to create a pre-production package for a future team, but also had to develop our own story for the projection mapping project from scratch and then produce it, so more time had to be set aside for the production. Developing a story takes time, and it took more time than we wanted, though knowing our team’s limitations let us work within our constraints. Still, we could not truly start production until close to the midpoint of the semester, so time that could have gone toward refining the experience against a settled story was not available.

Animation

The tricky part about animation was that the models had to behave like paper while they moved, and we also had to create a sequence in which a flat sheet of paper transforms into a bunny. This part of the animation caused a bit of a delay as we figured out the best way to do it.

First, we tried vertex animation. We created a flat plane model in Maya, with edges at every crease the finished origami model would have in real life, then placed a joint on every vertex and animated by keyframing the joints. With so many vertices to control individually, the process was very tedious and made it hard to put character into the animation. On top of that, Unity needs special plugins to import vertex animations. We researched other methods and found that we could not get blendshapes to import correctly either.

We then tried a different approach: multiple rigged models simulating the folding. There are nine folds in turning the paper into a bunny, so we created a paper model with faces aligned exactly to the folds of the real paper. We made 18 duplicates of the mesh and built 18 skeletons, two for each fold: one skeleton performs the fold, and the other animates after the fold to give life to the transformation. After completing the rigs and animations, we placed all the models in the same position and keyed their visibility parameters so that only one model is visible at any given time. With all the animations in sequence, the folding and transformation worked out well, and everything exported to Unity without any problems.
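The visibility-switching trick boils down to mapping elapsed time onto whichever of the stacked duplicates should currently be shown. A minimal Python sketch of that mapping (our own illustration; clip lengths and names are hypothetical, and in the project this was keyed on visibility parameters rather than coded):

```python
def visible_index(clip_lengths, t):
    """Given per-clip durations played back to back, return which clip
    (and therefore which duplicate model) should be visible at time t."""
    elapsed = 0.0
    for i, length in enumerate(clip_lengths):
        elapsed += length
        if t < elapsed:
            return i
    return len(clip_lengths) - 1   # hold the last model after the sequence ends

def show_only(models, clip_lengths, t):
    """Toggle visibility flags so exactly one stacked model is shown."""
    idx = visible_index(clip_lengths, t)
    return [i == idx for i in range(len(models))]
```

With two clips per fold (one to perform the fold, one to settle afterward), eighteen such entries play back to back and the stack of duplicates reads as a single continuously folding sheet.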

Programming and Technology

Our programming progress was usually slow, as we kept learning new things along the way about how to make animations look good in Unity and how to make projection mapping work. This slow pace drove much of what we cut from our original set of ideas about halfway through the semester for the sake of scope, including a smoke or fire effect on the table and on the book, simulated water in the cup, and shader refinement. We do, however, have a stable, working product, and we achieved all of our goals in this area once we had finalized our scope.

Lessons Learned

We learned that there are many things to consider when exporting animations from Maya to Unity, as Unity doesn’t support all of the animation methods that Maya does. Before spending too much time on an animation, you should always test to see if it exports correctly.


Everything you add is part of the story; it has a place and a purpose, and everything that becomes part of the experience needs to be justifiable. The desk setting is one example: as we playtested, we moved the physical objects closer to the projected experience, and those simple changes made a world of difference in how guests understood and perceived the experience as a whole.

Conclusion

As of right now, “Hoshi!”, our projection mapping experience, is ready for viewing. It is a portable installation that combines projection mapping with physical props and will be exhibited at the Entertainment Technology Center. We delivered on our promise of making a portable installation, and we also prepared a pre-production package for a future semester team.


We produced good work. Again, this was not without challenges, but we had a very collaborative team and worked within our constraints. We wish we could have had more time to develop the story further, but considering the limits on our time, we are extremely excited about what we were able to accomplish and create.