There is a lot of work out there about “design patterns” and how to employ them to keep code on large projects organized and maintainable. This series of posts focuses on what we’ve learned programming for VR specifically, within the Food Fight framework, and what kinds of foundations we’ve found useful in this area.
What Is A Refactor Anyway?
Food Fight has had an interesting journey as a project. The first iteration was a demo called Opioid Blast, which explored a similar idea of “throwing, destroying, or diverting negative cues.” This was one of the first VR projects from HealthImpact.Studio, and while it didn’t make it past the prototype stage, it informed a lot of the design decisions that went into Food Fight. It was also built quickly using Playmaker, which is fantastic for prototyping, but can lead to “spaghetti code” even on relatively small projects.
The first version of Food Fight itself explored similar concepts with food, with the main goal of releasing an “MVP,” or “Minimum Viable Product.” We have now released this first version in Early Access on the Oculus Store, with versions planned later for SteamVR and PlayStation.
Making a prototype and laying the groundwork for a larger project are different processes. When prototyping, the goal is to test the behavior we’re interested in and move on to the next part of the game. From a programming standpoint, any ‘custom’ behavior gets written as a conditional somewhere, as in the sketch below. Unity is a great engine for this sort of work because there is a whole lot you can do just by accessing the engine calls directly.
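For example, a prototype-style interactable might look something like this. This is a hypothetical sketch, not the actual Food Fight code: every special case lives as another branch inside one component.

```csharp
// Hypothetical prototype-style code: every special case is a conditional
// inside one component. Quick to write, hard to grow.
using UnityEngine;

public class PrototypeFoodItem : MonoBehaviour
{
    public bool splattersOnHit;    // illustrative flags, not from the real project
    public bool scoresWhenThrown;

    void OnCollisionEnter(Collision collision)
    {
        if (splattersOnHit)
        {
            // spawn a splatter effect right here
        }

        if (scoresWhenThrown && collision.relativeVelocity.magnitude > 2f)
        {
            // add to the score, play a sound, etc.
        }

        // ...every new behavior means another branch in this method
    }
}
```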
However, as a game grows in complexity, you start to find the limits of this approach. With more programmers joining the project, and with feedback from conferences and demos like ECGC confirming the concept’s viability, it became clear that we needed a refactor: to better allow for collaboration, to provide an organized set of behaviors to draw from for different parts of gameplay, and to give us a better workflow for adding more content later on.
A refactor is something like ‘spring cleaning’ for a code base. At first it looks like it will take a long time and you’ll just end up where you started. But just like spring cleaning, both the process and the results are satisfying, because you can now see the code in a much more organized and forward-looking way. The things we’ve learned early on can also inform the development of future VR games, both at HealthImpact.Studio and hopefully for others in the VR field as well. The overview we’re providing here is not going to be code-heavy, but it shows the process of thinking through the architecture needed to build out more complex game behaviors. It’s written with Unity in mind, but the same concepts and ideas could work across engines.
First Steps: Stripping the Concept to the Studs
VR itself is a new and exciting field. Obviously, the first thing most people notice is the immersion from the headset. This is certainly a huge step forward for environment design and an opportunity to showcase art and graphics in ways never before possible. From a game design standpoint, though, the most distinctive aspect of the Oculus, Vive, and PlayStation VR is that the player can use the position-tracked controllers to “physically” do things in the game. There’s so much you can do, in fact, that we wanted to start as basic as possible.
The core class written in C# for everything that can be picked up and put down is an “InteractableItem”. Its role is just to keep track of all the other useful pieces attached to the game object, such as its physics components, the InteractableEvents detailed below, and a data class. Besides setting itself up, it has no role other than serving as a ‘hub’ for all the other parts of the Interactable gameplay object.
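As a rough illustration, a hub class along these lines might look like the following. The exact fields and names are assumptions for the sake of example, not the shipped code.

```csharp
// A minimal sketch of a 'hub' class like InteractableItem.
// Field names and structure are assumptions for illustration only.
using UnityEngine;

[RequireComponent(typeof(Rigidbody))]
public class InteractableItem : MonoBehaviour
{
    public ItemData Data;                        // the data asset described below

    public Rigidbody Body { get; private set; }
    public InteractableEvents Events { get; private set; }

    void Awake()
    {
        // Cache references so behaviors can reach everything through this one hub.
        Body = GetComponent<Rigidbody>();
        Events = GetComponent<InteractableEvents>();
    }
}
```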
Alongside the InteractableItem is some ‘scaffolding’ called InteractableEvents. Its only role is to notify other parts of the code when the item is picked up, released, or destroyed, or when a given button is pressed or released. An extra set of events called CollisionEvents forwards information about collisions for objects that need that added logic.
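A minimal sketch of that scaffolding, assuming plain C# events (the actual project could just as well use UnityEvents), might look like this:

```csharp
// A rough sketch of the event scaffolding, assuming plain C# events.
using System;
using UnityEngine;

public class InteractableEvents : MonoBehaviour
{
    public event Action OnPickedUp;
    public event Action OnReleased;
    public event Action OnDestroyed;
    public event Action<string> OnButtonPressed;    // the string button id is illustrative
    public event Action<string> OnButtonReleased;

    // Called by whatever grab/controller code owns the interaction.
    public void NotifyPickedUp() => OnPickedUp?.Invoke();
    public void NotifyReleased() => OnReleased?.Invoke();

    void OnDestroy() => OnDestroyed?.Invoke();
}
```

A sibling CollisionEvents component could forward OnCollisionEnter and OnCollisionExit in the same style for objects that need that extra logic.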
Each InteractableItem has a data asset simply called ItemData. In Unity, we used a ScriptableObject for this. ScriptableObjects are very useful not just for data storage, but also for providing general bits of logic that can be pulled in and run from anywhere in the usual Unity workflow. They make setting up different objects’ data with the same overall code a great deal easier. This allows simple things, such as a food’s splatter colors or an object’s nutritional values, to be managed directly inside the Unity editor.
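Here is a hedged example of such a data asset; the specific fields are guesses based on the description above.

```csharp
// A hedged example of an ItemData ScriptableObject; the specific fields
// (splatter color, nutrition values) are illustrative, not the shipped code.
using UnityEngine;

[CreateAssetMenu(menuName = "FoodFight/ItemData")]
public class ItemData : ScriptableObject
{
    public Color SplatterColor;
    public float Calories;
    public float Sugar;

    // The behaviors the item spawns one frame after load (described next).
    public GameObject[] InteractableBehaviors;
}
```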
We took our data asset one step further and gave our data class a list of “InteractableBehaviors.” One frame after an object is loaded, it loads each of these behaviors as a child object. We wait a frame to avoid “race conditions,” where two different parts of the program run side by side but one expects the other to have already finished. Waiting a frame for the behaviors to load means that everything related to the InteractableItem is already safely in place and can be relied on by the behavior.
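One way to express that one-frame delay is a coroutine that yields once before instantiating the behaviors. The loader class below is hypothetical; in practice this logic would likely live on InteractableItem itself.

```csharp
// One way to defer behavior loading by a frame: a coroutine Start() that
// yields once before instantiating. The class name here is hypothetical.
using System.Collections;
using UnityEngine;

public class InteractableBehaviorLoader : MonoBehaviour
{
    public ItemData Data;   // assumed to carry the list of behavior prefabs

    IEnumerator Start()
    {
        yield return null;  // wait exactly one frame; the InteractableItem is fully set up now

        foreach (var behaviorPrefab in Data.InteractableBehaviors)
        {
            // Each behavior becomes a child object that can safely rely on the hub.
            Instantiate(behaviorPrefab, transform);
        }
    }
}
```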
This concludes our initial overview of the refactoring process. Come back next week to read about programming behaviors, embracing ‘the sandbox’, and what we learned from the refactoring experience.
You can read more about Food Fight on the game website, or download the game for free on the Oculus store.