Hello hello hello everyone! Welcome back to another Month in Review. Instead of saying “wow, I feel like I did the last one yesterday”, which I totally feel, I am going to say: it’s been quite a month of progress, exciting and unexpected developments, and plenty of work I can visibly show you! So stay tuned, and we can cover every last bit!
Accounting for Genome Studios
We finally began the process of onboarding with the accountants! We got to have the icebreaker meeting, get an idea of how the process works, what we should be doing, and how we should look at our work through the lens of the government tax programs. It was a relief to get deeper details on the whole thing, and it reinforced what I had originally understood about the process.
We have our next appointment lined up for next week! Very excited to get into the guts of the stuff, actually articulate the work that we’ve done over the past year, and see where that lands us. It’s definitely incredibly gainful that we’re developing bleeding-edge technology in open-source software that serves to expose complicated game development tooling and systems to others. Doesn’t hurt our business either, haha.
So, back to the Documentation Crucible.
I left y’all off with me working on our first official pass at our GS_Play documentation. Broke through and finished it! Yesss.
It was a gruelling process. Quite dry, and it gets more and more of a strain as you’re left with all the things you’ve put off for later.
This was an important personal test, though. A lot of us indies, self-made bootstrappers, hit a point in development, in production, where the work becomes unfun, the effects of the work become microscopic, and the total project, feature, or detail already feels plenty done. Time and time again, you’ll see someone stop there, share their work, take a break to decompress, and then find themselves with a massive project of “almost theres”. This is the widely talked about “final 90%” that comes after the initial 90%. It’s joked about and theorized, but in lived experience, only a small number of people can actually identify it, understand what needs to be done to accomplish that final 90%, and then execute on it.
I would like to say that I think I did a pretty good job! After a long history of big work that I could never even imagine being at 90% on, let alone polishing and battling my way through the next 90%, it was very validating to see myself stick to it and work out every detail I could come up with.
I did cover some of the many considerations around it: Organization of information, styling for legibility and appeal, substance of the content… But it ends up getting deeeeeep.
I finally got to the point where I could no longer do broad-stroke things and instead needed to parse through 200~ pages, page by page. I needed to identify every systemic pattern I could conceive as simple enough to visualize in a flow chart (intuitive patterns being one of the selling features of the framework); I needed to identify every place I could put a Script Canvas node screenshot to show scripted usage of the component or feature; I had to find every link that wasn’t a button, every section that needed a link…
I have a small passion around ease of information acquisition, or in other terms, navigation, in documentation. To that end, I have nearly every page linking from the easy section to the API section, or the API section linking to the easy section. These are called, in business terms, “funnels”. They are usually calls to action that ferry a user through the neck of the funnel to a precisely targeted place. In this case, it’s taking anyone who randomly stumbles into the documentation, never certain to land at the beginning, basic, or intro sections, to the place they were originally seeking. With a codebase and featureset so large, and only going to get significantly larger, these are really important elements of the total documentation delivery.
User experience is king.
Exactly like with the funnels, I scrutinized the organization of data. I had identified pages of certain depths needing to cover a certain bracket of the information, while never oversharing, and instead pointing to the finer-detailed pages down the hierarchy. (T1, T2, T3 pages.) Great on its own, but then you have to ensure EVERY page at that level of the documentation matches the exactly defined formula, has all the proper data, funnels to the proper places… It gets complicated. Eventually, you arrive at the edge cases, where you’re nesting umbrella pages under precise pages, with their own precise pages inside. It gets dizzying from an organizer’s point of view. I can’t imagine how it’ll impact the end-user trying to digest any of this from a blank start.
To that end, yet another fine detail: I piggybacked on certain site layout elements, which otherwise already have their own purpose, to put another call to action in the top corner of EVERY documentation page. In bright white headings, larger than anything in that visual area: “Need Help?” Get Support! “Found Mistakes? Docs Unclear?” Let us know! Every step of the way should shepherd the user to what they need, even support, or sharing their frustrations.
You’re in control.
Yes, you’re in control. Like a videogame, or any product anyone ever makes, you are the one designing it. Nothing that went into the design of a… mouse, happened by magical happenstance… divine intervention. This puts the onus on you. The experience you craft and deliver is up to you to design. Meaning, if you don’t put a supportive call to action on EVERY page, in an attempt to alleviate and clear up uncertainty for the end-user… then you didn’t, and it won’t.
So, right below that portion of the layout is a “Print entire section” button. Unbeknownst to me until the final 90%, when I clicked it out of random curiosity, was that it literally sets up a print of the page and ALL of its descendants. This means that if you go to the overview page of the entire docs, you can literally get a single monolithic doc of EVERYTHING. (Apparently 1,141 very poorly formatted pages. Yikes!)
Okay, now that’s a bit crazy..
Yeah. It’s a tangled mess. Certainly oriented towards more focused page clusters, but I digress. Guess what I immediately wondered: “Wait, how do you print an actual single doc page?” After some quizzical wondering, I just tried the standard Ctrl+P print command. Totally fine, that’s exactly what it does. BUT.
BUT.
BUT. That moment of pause, to wonder if that level of functionality is inherent in the implementation, was damning. Clear it up instantly. Shut up, just do it.
So I went in, copy-pasted the exact same line, and added the “Print page” button. Y’know what it does? Calls the Ctrl+P print command.
The original link also looked terribly aligned for whatever reason, so I shored that up, which was the original purpose of going over there.
But this is a prime example of the things you miss, and the things you just kind of deprioritize when you’re satisfied with the first 90%. That well goes so deep, you just need to keep drilling.
So, to pat myself on the back…
I think I tackled so many little things like that. It may not be 90% + 90% perfection, but it’s a great start to the foundation of the documentation. What I need next are more features and refactors to drive continued expansion of the documentation, and end-users poring over every page and paragraph. That kind of playtesting is essential in any product development process, and it’s just as relevant here.
Soon.
So I capped it off as a really good try at Documentation 1.0.0.
Yesssss. I can finally be free. I can do fun things!
So, to the end of working out more Del Lago Layover porting details, I tried to get after the Unity to O3DE converter tool I was poking at last month… oh jeez. Actually, two months ago…
I got some good headway, but it also came up poorer than I’d hoped.
I broke out a bunch of lingering components that were JUST not quite there: rigidbodies and colliders. I got the system to properly detect multiple materials and feed those references into the O3DE side. Really solid.
But then I hit some lame walls.
First, purchased Assets are garbage.
Ahaha, kinda, sorta… They are mega useful, and some can be pretty clean, but every one has its own methods for creation, its own methods for bringing it into certain engines… It’s a tangled mess of workarounds and manual labour… something I personally always want to get rid of through workflows and automation… but to each their own.
Due to this, though, there’s really only so much you can automate away. I started generating companion files for every model: “.assetinfo” files that define custom import settings for fbx files. You can do things like zero out the scale and position, which sorts out a lot of models. You can evaluate whether the fbx asset uses a different up axis, which would leave it lying on its face in the editor, and apply a corrective rotation… There are some good things there. But some things just needed to be manually handled.
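To give a feel for that companion-file generation, here’s a minimal Python sketch. Big caveat: the JSON keys below are made-up placeholders, NOT O3DE’s real .assetinfo schema; it only illustrates the workflow of generating one declarative import-settings file per model instead of hand-fixing each.

```python
import json
from pathlib import Path

def write_companion_files(asset_dir, zero_transform=True, up_axis_fix_deg=0.0):
    """Generate one companion settings file next to every .fbx in asset_dir.

    NOTE: these JSON keys are illustrative placeholders, not O3DE's
    actual .assetinfo schema.
    """
    written = []
    for fbx in sorted(Path(asset_dir).glob("*.fbx")):
        settings = {
            "sourceFile": fbx.name,
            "zeroOutTranslation": zero_transform,     # placeholder key
            "zeroOutScale": zero_transform,           # placeholder key
            "rootRotationDegreesX": up_axis_fix_deg,  # placeholder: up-axis fix
        }
        # Companion sits beside the model: model.fbx -> model.fbx.assetinfo
        companion = Path(str(fbx) + ".assetinfo")
        companion.write_text(json.dumps(settings, indent=2))
        written.append(companion.name)
    return written
```

Run it once over a purchased pack’s folder, and every model gets the same baseline fixes applied declaratively; the stragglers still get handled by hand.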
Another wall? A bug that needs to be a feature.
Maybe not a bug, but it’s a mystery…
In O3DE prefab files, when you change component details or whatever, it describes the details in JSON: Component ID, type name, “multiplier”: 10… etc.
In references to specific files, you often have the “So_And_So_Folder/Next_Folder/asset” path, but there’s also an entry called “AssetHint”, where you put the asset name and the folder you believe it to be in (or designed it explicitly to be in), and the asset system sorts it out for you.
Here’s the “bug”: the place in the codebase where that asset hint is processed, at least in the asset importation process, doesn’t actually do anything. It’s blank. Yet that’s how I’ve been linking the materials to an asset. Because we can’t know the UUID of the asset, due to creating these prefabs and things before they ever enter O3DE, we use the hint to point to the asset by name and location rather than by specific identifier…
Well, true to form for something we don’t understand… it works great for the “default material”. However, when entering the same data into the “Slot0”, “Slot1”, etc. of the material component, they get wiped or ignored. This put a major damper on that facet of the work, because the tool actually links n number of materials fairly successfully; they just don’t follow through in O3DE.
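For illustration, here’s roughly the shape of a hint-first asset reference as the converter would build it. The field names (and the .azmaterial extension) are my approximation of the serialized form, not schema-exact, and the folder path is the same placeholder used above:

```python
import json

# We can't know the real asset UUID before the asset ever enters O3DE,
# so the GUID stays zeroed and the hint carries the name + location.
NULL_UUID = "{00000000-0000-0000-0000-000000000000}"

def material_asset_reference(relative_path):
    """Build an asset reference that points by name/location via the hint.

    Field names approximate O3DE's serialized JSON; treat them as
    illustrative rather than schema-exact.
    """
    return {
        "assetId": {"guid": NULL_UUID, "subId": 0},
        "assetHint": relative_path,  # resolved later by the asset system
    }

ref = material_asset_reference("So_And_So_Folder/Next_Folder/asset.azmaterial")
print(json.dumps(ref, indent=2))
```

The mystery is that this exact shape resolves fine in the default material slot, and silently dies in the numbered slots.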
Spooky, oooeee ooo oooo…
Next.. I wanted to touch on the PROJECT THAT SHALL NOT BE NAMED.
The project Shauna has been working on has been paving ahead, still going strong. While I still can’t talk details, I think it’s a great opportunity to talk about that 90% thing again.
Oh no.
Oh Yes!
Throughout the development process of this project, I’ve been watching her spec out features and needs for the elements we had planned originally. She has been implementing work that gets the job done, but then scrutinizing what is actually going on, and whether that is, in fact, what we want out of it.
This has driven a handful of refactors to features that have led to a far more precise, but functionality-rich, set of features. It’s really cool to, again, see something that was doing great, be brought to a deeper level of great. As a wonderful side effect, this has also allowed her to simplify the codebase, modularize common functions into a central place, and otherwise stabilize the functionality a great deal. Things I love very much, considering we’re in the business of making strong, stable products over desperate, frantic, spaghetti-code game dev.
Be more like Shauna whenever you can.
The better it gets, the more excited I am for our big launch! Steady. Steady.
Back on the Gaian Madness Train.
I was still smouldering from pressing on the documentation stuff. Not only is it dry, but working on it for so long makes you lose track of the momentum you had with the fun things that move the toolset, and your ability to do gameplay, forward.
So I went pretty indulgent. Just stumbling around and poking at stuff I felt immediately drawn to.
And what was I drawn to?
Graph Canvas…
Again.. The akjhsfkabf graph canvasses!!!!
Like… objectively, it would empower a lot of people if we could make sense of it. It would empower us… but I also just wanted to get it working out of spite…
There have been a handful of features we sought to either have, get made, or make ourselves, and the graph system was one of the most immediate. Dialogue authoring with nodes and flow lines is so much nicer than any manually authored solution. I had built a large part of the surrounding functionality thus far, but it pales in comparison to how much easier it is to drag things around, visualize the trajectories of conversations, and just immediately punch out the details of the conversation, rather than: “This node connects to id 90735, 34792, and 94392.”
GRAPHS. GRAAAAPHS.
Well, I chose to poke at it. Shauna made some great headway in January, and I thought maybe I could jostle something loose.
And boy did something shake loose.
Like an interrupted sneeze finally let loose, we broke through on critical base things like registering nodes from ‘anywhere’ and properly populating them on the palette. Then came launching windows for “sub tools”, as in… whatever custom graph tool you are making. Making it so registered nodes don’t appear on any graph they’re not supposed to.
Then things started speeding up. We want to be able to toggle flow lines on and off, which changes the purpose of a graph’s functionality. What about toggling variables support? Sure. Should there be mandatory nodes in the graph? What if I want to put custom information on the face of the node?
All things that were already part of the graph systems. But for this, we were trying to make them just simple settings: you plug in, set your terms, and out pops a graph tool.
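To make “just simple settings” concrete, here’s a hypothetical sketch of what such a settings bundle could look like. None of these names come from the actual Graph Tool Framework; they’re purely illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class GraphToolConfig:
    """Hypothetical settings bundle for a graph tool.

    Names are illustrative, not the real Graph Tool Framework API.
    """
    tool_name: str
    enable_flow_lines: bool = True       # draw/execute flow connections
    enable_variables: bool = False       # variable panel support
    allow_looping_flow: bool = False     # can flow lines form cycles?
    mandatory_nodes: list = field(default_factory=list)
    node_palette: list = field(default_factory=list)  # nodes this tool exposes

def make_dialogue_editor_config():
    # The dialogue editor as a child of the framework: toggle the pieces
    # of functionality you want, and out pops a graph tool.
    return GraphToolConfig(
        tool_name="Dialogue Editor",
        enable_flow_lines=True,
        allow_looping_flow=True,
        mandatory_nodes=["DialogueRoot"],
        node_palette=["DialogueRoot", "DialogueLine", "DialogueChoice"],
    )
```

The payoff of a shape like this is that a second tool (say, a quest editor) is just another config, not another fork of the graph code.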
So guess what?
WE HAVE A DIALOGUE EDITOR!!!
The dialogue editor is a child of that system, toggling different pieces of that functionality to get the configuration we want. I converted my “nodes” into actual nodes that register and appear on the palette…
You can toggle on and off the ability to loop the flow lines; the node faces are HTML you can program the node itself to output… Saving and loading happens fairly automatically… Graphs can be instantiated and executed to actually affect gameplay and runtime systems based on what is on the graph…
It’s sooo cooool. SOOOO COOOOOOOL.
We plan to make the Graph Tool Framework base open source once we stabilize it, because right now it’s just a mess to get fun things going… But I think it’ll be a very powerful addition for anyone seeking to do complicated systems, made better with the visualization and control of a graph-driven system.
This was a major breakthrough, and I look forward to being able to create other editors around graph stuff in other feature sets.
Finally. After so, so long of trying to get anything to work, with a little chip each time, we got into it and unearthed a lot of functionality. It was quite worrisome that most of the people who worked on any of the graphing things are no longer available to help make sense of it.
Off to something totally different.
Shauna also put together a small plugin for Leantime: a GitLab-to-Leantime project sync.
It makes issue tracking in GitLab affect the task planning in Leantime for the same project. It’s a very exciting little utility, as redundant work and split focus across different developers are not an optimal way to operate.
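As a rough illustration of the idea (not Shauna’s actual plugin code), a sync like this boils down to mapping GitLab issue webhook payloads onto Leantime-style tasks. The payload keys follow GitLab’s issue webhook shape; the task fields are simplified assumptions:

```python
def gitlab_issue_to_leantime_task(event):
    """Map a GitLab issue webhook payload to a Leantime-style task dict.

    The task field names are simplified/hypothetical; the real plugin's
    shape may differ.
    """
    attrs = event["object_attributes"]
    state_map = {"opened": "todo", "closed": "done"}
    return {
        "headline": attrs["title"],
        "description": attrs.get("description", ""),
        "status": state_map.get(attrs["state"], "todo"),
        # Stable key so re-delivered webhooks update instead of duplicate
        "source_ref": f"gitlab-issue-{attrs['iid']}",
    }
```

The `source_ref` key is the important design choice: webhooks can fire repeatedly, so the sync needs something stable to match on, or every edit in GitLab would spawn a fresh Leantime task.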
As we further integrate our work into the Leantime workspace, it will drastically improve our ability to be aware of things going on.
It gives me inspiration to wonder what other integrations we can make across our chosen services. There are so many opportunities.
Now, TECHNICALLY.
This is where I was actually wrapping up the documentation. Took a lot of the month still. It was quite the endeavour.
Then, WELL AKTCHUALLY…
I worked on the last bit of the Unity to O3DE import tool.
And wrapped up the Dialogue Editor work… Getting close to the end of the month.
Back to new things!
I did a small side project. Trying to import a big animation pack from a single fbx file into O3DE is agonizing. Many engines are able to parse any number of animations out of a single file, but O3DE does not currently have that feature. This means you need to create one animation per file…
In this case, that’s 270~ files. NO WAY am I doing that manually.
Found a small bulk animation exporter utility to do the job. Sweet.
I’m pumping out these animations, and it’s taking literally like 4 minutes per animation. I guess that’s what happens when you’re using a bulk tool… I worked in some small quality-of-life features: being able to select a subset of ALL the animations, and having each export step one frame in order to update a progress bar. I figured that would probably help with the freezing.
It turns out the one issue on the repo, which I only saw as I was letting my computer run through every animation, said that despite naming each file after its target animation, every file was receiving ALL the animations anyway, defeating the purpose.
Oh… I see. I was exporting 270 files of 270 animations each.
Okay, yeah, that’s not okay. No wonder Blender was chugging. It would have taken like 4 hours to export all of those.
Fixed that up, and exporting all the animations took maybe 3 minutes. Phew.
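The essence of that fix, boiled down to a testable sketch: pair every output file with exactly one action, instead of baking all of them into each file. (The real utility drives Blender’s FBX exporter with a plan like this; the names here are generic.)

```python
def plan_bulk_export(action_names, selected=None):
    """Plan one output file per animation (action).

    The original bug: every exported file baked ALL actions, so 270 files
    each carried 270 animations. The fix is to pair each file with exactly
    the one action it should contain. `selected` supports the
    quality-of-life subset export.
    """
    chosen = [a for a in action_names if selected is None or a in selected]
    return [(f"{name}.fbx", [name]) for name in chosen]
```

One action per file is what turns a 4-hour chug into a 3-minute pass, since each export only has to bake a single animation.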
Really helpful tool now, I definitely plan to hold it in the back pocket for any other time I need some bulk exporting like that.
Introducing a Development Mannequin!
This was all surrounding a mannequin set by Quaternius: The Universal Animation Library, and Universal Animation Library 2. Seeing the utility of this, I approached Quaternius to see if we could include it as a 3rd party gem for O3DE, in order to support developers with a uniform common mannequin for development, with a handful of animations.
We got the go-ahead to use the Free tier assets from both, giving us a Masc and a Femme mannequin, and around 75~ animations to work with. Many are nice common motions and actions people can use for common gameplay needs.
If you’re interested in 270 animations for your own needs, I definitely recommend them. They even have base human models and modular fantasy equipment.
I look forward to seeing it in action and how users will use it in their prototyping and experimentation!
This brings us to the finale of the month.
And what a finale it is…
I’ve been sitting on a plan I broke down months ago, about hijacking O3DE’s Image-Based Lighting Global Illumination system. It uses a cube map (skybox) to determine the general lighting of models by projecting the colours of the skybox onto the faces of the models: from below, around to the horizon, and then up above.
What if, instead, you created a low-resolution cube map on the fly and gave THAT to the image-based lighting system? And what if you could determine what colours would be on that cube map?
Well, you’d get a very common method of Global Illumination: Colour, or Gradient, Global Illumination. You decide 3 colours, Low, Mid, and High, and blend those across a generated cube map.
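A simplified sketch of how that blend can work per texel: pick the colour from the vertical component of the texel’s world direction, lerping Low to Mid below the horizon and Mid to High above it. This is the idea, not O3DE’s actual code:

```python
import math

def gradient_color(direction, low, mid, high):
    """Pick the gradient colour for a cubemap texel's world direction.

    Blends Low -> Mid from straight down to the horizon, and Mid -> High
    from the horizon to straight up, using the direction's vertical
    component. A simplified sketch, not O3DE's implementation.
    """
    x, y, z = direction
    length = math.sqrt(x * x + y * y + z * z)
    elevation = y / length  # -1 straight down, 0 horizon, +1 straight up
    lerp = lambda a, b, t: tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))
    if elevation < 0.0:
        return lerp(mid, low, -elevation)  # horizon down to below
    return lerp(mid, high, elevation)      # horizon up to above
```

Evaluate that for each texel of a tiny cubemap (even 4x4 per face is plenty for ambient colour) and you have the generated image to hand to the IBL system.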
Alright, so what became of it?
GRADIENT GI!
I brought the plan back up and put it through Claude Code, pointing to all the references and source code around the systems and, in literally 2 iterations, got Gradient GI.
HOWEVER, it wasn’t completely that easy. In order to support mobile, this followed the exact specs of what I described above: you generate an image, give it to the IBL system, it reacts to the new image, destroys the old one, and adds the new image in its place.
This is very heavy and cumbersome, and it turns out it does not allow the real-time changes that things like a Day/Night cycle use to shift ambient lighting based on time of day. Because it’s always deleting and recreating images, it mostly locks the render system in an “image is deleted” state… But any GI is better than none. And mobile isn’t thrown under the bus again, after so many of the engine’s lighting methods have focused on AAA development.
Fair. Buuut…
I REALLY wanted real-time Gradient GI changing over time. Like, that’s ESSENTIAL to making a day-night cycle system, and something I absolutely want to create in GS_Play for eventually creating Awaken, Guardian. So I pressed it a bit more.
This version, the “Dynamic” mode, is very similar but has unique details, which are pretty cool.
One of the follies of the Static mode version is the continuous recreation of the cubemap: the destruction of the old one and the swap to the next. What would be ideal is to pass in the image once and then just edit it. The rest of the render system wouldn’t care, because it just says “I need colour right here” and uses whatever the cubemap has…
Alright, well, that solution turns out to be an “AttachmentImage” instead of the “StreamingImage” the static mode uses. Now we’re not recreating the image, but that means we need something to do the continuous editing. That is handled by a compute shader and a custom pass.
Doing this gives us control to put the image reference where we need it, point the shader at the right image, and then use nice normal shader logic to process colour onto the image. The compute shader, and I believe the AttachmentImage, are not compatible with mobile, and thus were the point of contention in the first iteration.
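And the day/night payoff, sketched out: since Dynamic mode re-shades the same image every frame, shifting the ambient over time is just blending the gradient colours by time of day before feeding them to the shader. The weight curve and preset shapes here are assumptions for illustration:

```python
import math

def blend_gradient_presets(day, night, time_of_day):
    """Blend two (low, mid, high) gradient presets by time of day.

    time_of_day in [0, 1): 0.0 = midnight, 0.5 = noon. The cosine weight
    curve is an illustrative choice; the point is that Dynamic mode can
    re-shade the SAME cubemap every frame instead of destroying and
    recreating it.
    """
    # 0 at midnight, 1 at noon, smooth in between
    daylight = 0.5 - 0.5 * math.cos(2.0 * math.pi * time_of_day)
    lerp = lambda a, b, t: tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))
    return tuple(lerp(n, d, daylight) for n, d in zip(night, day))
```

Each frame, the blended (low, mid, high) triple goes to the compute shader, which repaints the attachment image in place; nothing downstream ever sees an image get destroyed.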
Now, we have both. We do want to support mobile. Buuuut… I have real-time GI colour changing for Day and Night cycling!!! Soooo coooooooooll…
What an exciting month!
It was a really busy, demanding, turbulent, but gainful month. I tackled difficult things, fun things, and random side stuff. I’d say that’s a pretty good month of work.
Now, as all this exciting new stuff settles, I return to doing small work: Tying off loose ends, finalizing small things still needing attention in our tools, and figuring out how we’re going to move forward as a company.
We want to shore up everything we can before we shift into GS_Play Alpha. We’re getting closer and closer to being able to make sweet games, with some really powerful tools to get us there.
Hold your breath! See you next month!
Btw, you can now rep Genome Studios on Discord with our brand new shiny server tag! Every bit counts!
P.S. Did you know you can keep up with our Month in Reviews by signing up for our newsletter? Just check below!
Want to keep track of all Genome Studios news?
Join our newsletter!