What a wild and tangential month. Great work all around, whether expected or not.
It was my Birthday!
December is a dense month for celebrations for me. As many December babies know, you end up kinda stacking your birthday with the holidays. This year was nice: little pressure to “do it right”, so we just had some good food and did much of nothing. One of the better birthdays I’ve had in a while!
We also got a new ice cream machine! I am lactose intolerant and began making homemade ice cream with a hand-me-down bucket ice cream machine. It was heaven, and so we’ve been doing it ever since. This new machine is an appliance that actually freezes on demand, so fancy, but also so nice.
With the power it gave us, we ended up making ice cream for Christmas gifts!
It works out to roughly $2.50 per pint cup, and each batch made 3. With this gift run, we basically paid off half the machine in savings. And the ice cream is reeeeallly good.
Don’t be afraid, it’s so easy to make: Philadelphia Style Ice Cream Recipe.
Now, about Genome Studios.
Genome Studios got a server computer!
It’s very cool, I’m excited for the autonomy and potential it provides us. All of it within our own control and direction!
Turns out there are so many cool, free and open source, projects out there for self-hosting. It’s incredible what people are able to make, and that there are people who will support and develop it thereafter to the point that some rando like myself can find out about it, evaluate its website and features, and then decide to use it for our completely unrelated goals. Open source is so amazing. The power it has to democratize people’s ability to work and contribute to the greater world is astonishing. The more I discover about it, the more I want to take part. It’s so powerful in giving people a voice and an ability to collaborate and impact others, all for the sole reason of contributing back in thanks for what the software provided for them. I love it.
So, turns out there’s so much to know about laying down server infrastructure. WhO wOulD hAve iMaAAagiNEd??
To start, here’s a quick preview of the total server specs, which I’ll get into in greater depth below:
We got a GMKTec Micro PC. The NucBox K11. (Commission Link if you are interested.)
Hardware:
- AMD Ryzen™ 9 8945HS
- 32GB DDR5 5600MHz SODIMM RAM
- 1TB M.2 SSD
- Two 2.5Gbps Ethernet Ports
Software:
- Windows 11 Pro Included (Really Helpful)
- Pro allows the use of Hyper-V virtual machines.
- Nginx – Reverse Proxy Service
- MeshCentral – Remote Connect Service
- Nextcloud – File/Cloud Service
- Docmost – Documentation Service
- OpenProject – Task Planning Productivity Service
- GitLab – Repositories and CI/CD Automation Service
- Penpot – Graphics and Design
Okay… But why get a server?
Builds. At the absolute core of this whole thing, builds. Fun, and cool other things, too. But builds.
We bought this particular machine because building happens on the CPU, and this one’s is fairly powerful. No need for a GPU, because we’re not developing or gaming on it, so integrated graphics are fine.
O3DE can come in roughly 3 flavours:
1) The O3DF-provided O3DE Engine by installer. The one you download and install.
2) The raw Source Code, built together with the project. (This is the one I’ve been doing aaall throughout 2025)
3) And, a locally built engine that is built as an “install”, which carries only the engine data. The “install” part means all the source code has been built and packaged into DLLs and whatever else, so there is no longer raw engine code. This lets you point your projects to it, exactly as you would with the engine installer. It does not create an install wizard or anything; it’s literally just the files the wizard puts on your computer. (This is a key detail for this type of build.)
Right, and?
When you build a project linked to the source code, it’s building the engine, the project, its dependencies, and all that good stuff, all at once. This is great when you’re doing one or two specific projects, because you can look at the source code, build whatever you want, and sync your source code to the dev branch, or to exotic feature branches in the O3DE repositories, on demand. It’s fluid, it’s adaptive, and it’s really easy. This is how I set up my “Using vscode with O3DE” tutorial.
When you first build your project, you end up with a massive full rebuild. This is the engine and project from scratch. Close to this degree of commitment happens when you make a major update to the source code from GitHub and have drastic changes within the engine to process. Otherwise, once you’re settled in, building your project is quite minimal. You’re only updating a few files when you build, and thus it takes 1-3 minutes. Easy.
HOWEVER. This is done individually with every project. So, for every additional project, you need to do a full engine rebuild to catch up. Tolerable when you’re solo and have 2 projects. Just run one overnight, or while you’re watching TV in the evening. It’s low impact.
But then you add collaborators. Now YOU are rebuilding the project with your version of the engine, updated to whatever state it’s in, and your collaborators are doing the same. If their hardware is not as good, it takes them longer, and if you happen to be one commit out of sync when they try to rebuild their project with some of your changes (say, adding a component to a prefab or what have you), everything is out of sync. This was causing repeated full rebuilds just to realign everything, even when none of the changes touched the engine source code. Basically, a tug of war of building and rebuilding to anyone’s given engine state.
We wasted many hours struggling with this problem, which is when it became apparent what that “local install” type engine was for.
If you have one cut of the engine, built and working, you can deploy it to countless projects and provide it to countless people, and you’ll all be in sync; you’ll all only have to build the custom bits of the projects to bring them up to speed. This is orders of magnitude less work to rebuild. You end up with one build cycle that satisfies EVERYONE simultaneously.
Great so let’s do that one!
Yes, but….
Now you need to decide which computer will be the one to build. That computer will be tasked with a “total rebuild” level of effort, locking you out for one to two hours, depending on your hardware. So the build still needs to be strategically scheduled to prevent someone from being locked out for a third of their workday.
Next, how are you going to get it to your team? The engine builds hover around 3.5GB. That takes quite a long time to drag into Google Drive. You could host it as a git repo… One way or another it gets complicated.
So, now it would make a lot of sense to actually formalize what you’re intending to do and actually build out a pipeline to do it.
Alright. What does that look like?
We want to let the builds percolate out of sight, out of mind, and allow everyone to freely work without fail. This definitely points towards having a dedicated machine to do the work and dole out the results.
This could all be done by renting server space with a corp and having them handle all of the hardware and maintenance and stuff, but where’s the fun in that?
Aha, actually, it’s because once you have a server ready to do one thing… you can get it to do other things, too. Many of those things can be free and open source alternatives to subscription-based services like GitHub/BitBucket, Jira, Confluence, Notion… There are so many production tools that you end up having to pay a premium for, for the pleasure of trying to make video games on a 0 dollar budget.
What if you could remove the dependence on those services and spare yourself the expense? Well, you could look at how much cost you’re “saving” per month, then calculate how long it would take for the local server machine to pay itself off. It’s not half bad if you’re in this for the long haul.
(In CAD, for around 10~ users)
- Bitbucket: $62/mo
- Confluence: $40-60/mo
- Jira: $40-60/mo
- Google Drive (Workspace): $9.50 per user ($95/mo)
It gets quite costly… Hovering around $275/mo.
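If you want to sanity-check that payback claim yourself, here’s a tiny back-of-envelope sketch in Python. The monthly figures use midpoints of the ranges above, and the hardware cost passed in is a hypothetical placeholder, not what we actually paid:

```python
# Rough payback estimate for self-hosting vs. subscriptions (CAD, ~10 users).
# Monthly costs are ballpark midpoints of the figures listed above.
MONTHLY_COSTS = {
    "Bitbucket": 62,
    "Confluence": 50,               # midpoint of $40-60
    "Jira": 50,                     # midpoint of $40-60
    "Google Drive (Workspace)": 95, # $9.50 x 10 users
}

def months_to_payback(hardware_cost: float) -> float:
    """How many months of avoided subscriptions cover the server hardware."""
    monthly_saving = sum(MONTHLY_COSTS.values())
    return hardware_cost / monthly_saving
```

At roughly $257/mo in midpoint savings, even a fairly beefy mini PC pays itself off well within the first year.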
While you may not be relying on all those services right from the start, the free tiers of many of them become immediately restrictive once you’re either using up storage space or adding your ~4th collaborator. This can simply be from adding a mentor, or introducing a contractor to the mix temporarily; context doesn’t matter.
I am not arguing that you aren’t getting value from these services. Stability, global distribution, no maintenance on hardware or network… But if you’re interested in sticking it to the man and having autonomy over your work, your tech, and your scalability, you can get free equivalents to all these services, restricted only by what you’re capable of setting up and the cost of getting your hardware up to speed.
Speaking of Speed.
I lucked out. RIGHT as I bought this server computer, I got an offer to upgrade my internet to get up to 3Gbps up/down. This is with a provider that runs direct lines from you to their infrastructure. No shared connection with your neighbourhood. We got it for more than 50% off for the next 2 years, with some very competitive rates thereafter.
Serendipitous.
This let me rapidly overhaul my local network and reconfigure the placement of my machines to get this server computer running at the max capacity of its Ethernet port: 2.5Gbps. This alleviated one of the biggest issues of setting up a home server: how will you get 3 teammates syncing 3.5GB engine updates without locking them behind a 100Mbps connection split 3 ways?
OKAY SO. Theory aside…
I started this month trying to set up some of the immediate services that came to mind. With a server computer, you usually want to use a remote connection to control it. This means you don’t need a dedicated terminal at home, and your engineering team can connect to it and do work with it from anywhere in the world.
Enter Mesh Central.
Windows 95 called, they want their Remote Software back.
This is one of the most raw-looking of the service solutions I found. It works very easily, and so far has been very capable. I am probably a bit naive, as I am on the same LAN, meaning it will be worse out in the great wide world, but it gets the job done, sets up incredibly easily, and has been massively beneficial already. 85% of the work I’ve done with this server over this month has been through remote connection. Primo.
Next? File server.
Storage is one of the easier things to provide, as storage capacity is rapidly growing and its price is rapidly sinking. Services like OneDrive, Google Drive, etc., throttle your bandwidth, so you’re never uploading your files at 100% network capacity. You can easily tell because installing Steam games runs at, like, 10x the speed. This is because they want you to be in your game experience asap. It’s in their best interest to make that as pain-free as possible.
So I bounced around a bunch of solutions, but was smitten by this one in particular: Nextcloud. Some of these open source solutions bury their community/free versions deep within their websites, with few links that target it directly. They then scatter all sorts of “Install… the premium” links within the free community pages. Nextcloud is brutal for that. Buuuuut, it’s really nice.
Additionally, this was a Linux-only service. I started noticing that MANY of these sorts of tools are predominantly Linux-driven. Turns out.
Thankfully, as I mentioned above, this server box came with Windows 11 Pro, which allows the installation and use of “Hyper-V”, a virtualization service that lets you run virtual machines of other operating systems from your Windows installation. What a relief.
Once I punched through getting the community version and learned how to get a VM started and hosted, things started to get pretty cool. Nextcloud has a gorgeous interface, and I think it’s pretty powerful for my needs. Like any cloud service, you can share folders, make shared links, edit files in-app, etc.
This is all where the “fun” part started coming in. It’s super exciting to have all these new toys to play with. Server machines, Virtual Machines, services running and providing really cool tech. You can configure the file structure, install extensions, create groups and users, configure everything exactly how you want it… It’s all so cool!
But then. Dun dun dunnn…
How do you “HOST” this server? Like. Make a connection to it from the outside world?
Locally, you’re pointing to each IP directly: 192.168.1.37, 87, 44, etc.
It turns out that you only have the one public IP that goes from the world to your home network. So arguably, you can only have 1 service available.
But I mean, there’s all this port forwarding stuff, I do it with games all the time: different software has different ports and all that. Obviously, you can just point to your public IP + port “203.0.113.191:567” to get to the service on port 567. Nothing else is listening on that. Right?
You sweet summer child…
Port mapping like that only gets you so far; there’s no clean mechanism to expose a whole pile of services that way from the internet. So what do you do?
You use a “Reverse Proxy”! Rather than sending your connection out into the world, having it connect to something somewhere else, and masking your presence (a proxy), you take “any” connection coming in and, based on what it’s asking for, point it to the single right service.
This was done with yet another service, Nginx Proxy Manager (NPM). This service waits at the door of your network, captures the connection, then relays it to the right service by local IP, using the ports you designated earlier.
How do you determine what a connection is asking for? By making DNS domain names with your website hosting provider. You have a domain, a website, and are hosting it with a webhost… right?
With that, you can make any sort of “service.myawesomestudio.com” pathway that Nginx can read and pass on to the desired service, matching the URL to the local IP and port.
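Conceptually, the reverse proxy is just a lookup from the requested hostname to an internal IP and port. Here’s an illustrative Python sketch of that idea; the hostnames and addresses are made up, and in reality Nginx Proxy Manager handles all of this for you:

```python
# Conceptual sketch of reverse-proxy routing: the incoming Host header
# decides which internal service the connection is relayed to.
# All hostnames and local addresses below are invented examples.
ROUTES = {
    "cloud.myawesomestudio.com": "192.168.1.37:443",
    "git.myawesomestudio.com":   "192.168.1.37:8929",
    "mesh.myawesomestudio.com":  "192.168.1.44:4430",
}

def route(host_header: str) -> str:
    """Return the internal IP:port target for an incoming Host header."""
    try:
        return ROUTES[host_header.lower()]
    except KeyError:
        return "404: no service configured for this hostname"
```

One public IP, one listener on ports 80/443, and the hostname alone fans connections out to any number of internal services.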
Suddenly, your server with an IP address going to a lone remote connection service becomes a robust delta fanning out to any number of services, all with functional domain names and a lightning-fast connection!
Now things are getting cool.
So we now have remote connections to service the machine, a file server with a nice interface and expansive storage. Everything can have user accounts made to give to your team and manage from the admins…
What other stuff could we do? The world is our oyster!
Who knows, because all this server stuff was off the clock, and I actually need to do GS_Play work. Otherwise, what’s the point of having a server if you don’t have a product?
Oh right, yeah, we’re a game studio… Ehem…
To that end, I wanted to begin focusing more on actually being able to create the content that builds some actual experience in our game project. As always, there are a dozen little bits that you overlook and then have to tackle before you can do anything creative, which is how I came to enter cinematics. While dialogue is part of that feature set, cinematic experiences are far more than just the text appearing. Who is doing what, where are they going, and are they triggering things in or out of the cinematic? Many questions that need answers.
Quickly, I wrapped up conditions and triggers in the dialogue, as I only had hard-coded mocks, which meant I couldn’t make any unique functionality.
Then I started nice and small. Stage markers in theatres are the tape X’s on the floor that the performers walk to, stop at, and otherwise navigate around. They’re absolutely necessary for cinematics that are driven by piloting in-game characters and objects, rather than massive, fully controlled sequences using fully animated copies of everything in the particular area. Got that done. You can place them and pull up the one you want at any given time. Easy.
Next, how do you get a character to its mark? This was a very exciting bit of work: we needed cinematic control over the units in the cinematic. A unit controller, if you will… that’s controlled not by a player or NPC AI, but by a cinematic driver…
This case is EXACTLY what we’re very excited about for the GS_Play gameplay framework. All I needed to do was inherit the unit system’s controller class and make a “CinematicController”. When a major cutscene activates, it possesses the unit, locking out player controls, and then responds to cinematic triggers and cues. When the cinematic ends, it unpossesses the unit, and the player controller re-possesses it. Perfect. This immediately allowed us to get to what it will do, rather than how we will do it at all. These sorts of intuitive patterns and systems are exactly what we’re trying to lock down and refine. It’s a bit harder on the initial authoring end, but once you have the free rein to mix and match freely for a game production, you can rapidly create new or combined functionality you need, right on the spot. Ideally, in a way that allows it to get largely plugged into systems already running and working, so you can just worry about what the game does, not how it can be made. A very exciting and validating moment.
And to that point, cinematic starts, you cannot move the character, it ends, and you’re back in control instantly. Nice.
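To illustrate the possession pattern, here’s a Python sketch of the idea. This is not actual GS_Play code; every class and method name here is made up for demonstration:

```python
# Illustrative sketch of controller possession: the unit doesn't care who
# drives it, so a cutscene can swap the player out and itself in.
class UnitController:
    def handle_input(self, unit, event): ...

class PlayerController(UnitController):
    def handle_input(self, unit, event):
        unit.apply(event)          # normal player-driven action

class CinematicController(UnitController):
    def handle_input(self, unit, event):
        pass                       # player input is ignored during a cutscene
    def on_cue(self, unit, cue):
        unit.apply(cue)            # actions come from the cinematic driver

class Unit:
    def __init__(self):
        self.controller = PlayerController()
        self.log = []              # record of actions, for illustration
    def apply(self, action):
        self.log.append(action)
    def possess(self, controller):
        self.controller = controller
    def send_input(self, event):
        self.controller.handle_input(self, event)

unit = Unit()
unit.send_input("walk_left")                      # player in control
unit.possess(CinematicController())
unit.send_input("walk_left")                      # ignored: cutscene owns the unit
unit.controller.on_cue(unit, "walk_to_mark_A")    # cinematic cue drives the unit
unit.possess(PlayerController())
unit.send_input("jump")                           # control restored instantly
# unit.log == ["walk_left", "walk_to_mark_A", "jump"]
```

The unit never knows or cares which controller owns it, which is exactly why swapping drivers at cutscene boundaries is so clean.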
So to celebrate, we had another wonderful issue of needing to do a complete rebuild again… Getting a lot less funny these days…
Quickly ran out of steam again. This entire month has been so low energy.
So rather than do nothing, I broke back into working out the server logistics.
Now that we had file hosting and remote connection, it was finally time to tackle the sole purpose of the server: making builds and deploying them.
The steps needed for this process are:
- Detect changes on an “Engine Deployment” Bitbucket repository and begin processing.
- Sync the repo to the server computer.
- Run an “Install” type build.
- Measure if it succeeded or failed.
- If successful, copy that built install into a deployment folder.
- That folder can somehow be accessed from the internet.
- On a team member’s computer, somehow detect that the deployment folder changed.
- Download the new install.
- When the editor is closed, replace the install folder you already have with the new downloaded one.
- Tada! In totally easy terms, you got your entire build pipeline figured out. Child’s play.
No way. Not easy at all.
Not easy, but doable. Because it is such a specific system, I started with the assumption that we could make some dedicated scripts that do one thing or another, and then, with each handing off functionality to the next, you have a clean and simple process for this “build -> deployment” pipeline. So I got after it.
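As a very loose sketch of that hand-off idea, here’s the shape of the server-side half of the process in Python. The stages are injected as callables so the control flow is visible; every name here is a stand-in, and in the real pipeline each stage shells out to git, CMake/Ninja, and file copies:

```python
# Skeletal sketch of the server-side pipeline: sync, build, deploy on success.
def run_pipeline(sync_repo, build_install, deploy) -> str:
    """Run the stages in order; only deploy if the install build succeeded."""
    sync_repo()
    if build_install():
        deploy()
        return "deployed"
    return "build failed: nothing deployed"

# Dry run with stand-in stages that just record what they would do:
steps = []

def fake_sync():
    steps.append("git pull")

def fake_build():
    steps.append("cmake --build ... --target install")
    return True  # pretend the install build succeeded

def fake_deploy():
    steps.append("copy install/ -> deploy/")

result = run_pipeline(fake_sync, fake_build, fake_deploy)
# result == "deployed", and steps records each stage in order
```

The client-side half (detect, download, swap in the new install) mirrors this same chain in reverse.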
First discovery. I set up the scripts to use Ninja, a faster build tool than the Windows default. I talked about this in a previous blog.
However, the reporting was not what is normal for Ninja. Oh well, it’s an “Install” build, it must be different somehow…
No. This was the standard MSVC builder. I was using a build method described in the O3DE documentation for building engine “installs” using a “Preset” structure, something I hadn’t done before. But that’s just a different thing about building an install build…
No. The presets just do all the build configuration and calling automatically, using defaults. So, the “preset” for building the installer was setting the build to use MSVC. After that, I broke out to manual build settings, as I’ve always known them, and finally enabled Ninja. DRASTIC improvement to build reporting and build speeds; it’s night and day.
All great work, despite the hurdles… and boy were there hurdles.
Accidentally dosed myself with the wrong nighttime meds and was involuntarily cranked for hours. Totally destroyed my energy reserves.
Microsoft is getting pretty controlling with their services. In Windows 11, they removed the ability to make local accounts, LOCKING you into making a Microsoft account for every user. This is stupid, because I just wanted a service account for the server, to decouple the admin from the techs who will use it regularly.
They had a very interesting “prove you’re a human” registration system… It would make you do 10 tests of the same kind, which actually took some looking at to verify. Great, done. Except I got one wrong… it shows me what I chose and why it was wrong… It was not wrong… Okay, a slip-up. Do it again… and again… 40 tests in and this… not going to swear… system was totally gaslighting me and PREVENTING ME FROM MAKING AN ACCOUNT THEY ARE FORCING ME TO HAVE TO MAKE.
Resolved it online, and didn’t have to be inundated with another 10 tests. Great.
Now I find out that my scripts, which I was relying on LLMs to create, were not actually building the “install” type build target. They were just building the standard “editor” target and planning to copy all the raw build output into the pipeline. Not impossible, but the install is dedicated to making a nice cluster of only what you need. Argh. These are the parts about gen AI that prove it’s hardly as out-of-control brilliant as people falsely hype. Great tool, great for supplementing things I don’t have the expertise or resources to do alone (cough cough server infrastructure), but relying on it with no evaluation or scrutiny leads to exactly this mess.
Yet I have no choice but to persist. We have a server with a build pipeline, or we don’t.
After all this I was donezo. So we finally closed shop for the season.
We get a whole 3 weeks break! Thanks mandate for work-life balance!
So in celebration of the solstice, holidays, and new year, as well as my burnout, I decided to work for the rest of the month!
Signature Gaian celebration!
Now off the hook, I took part in a week-long game jam!
The colleagues I’ve been co-developing the previous conceptualization projects with and I decided to try out a game jam over the next week to see if we could drum up anything interesting. As always, I’m honestly so surprised by how fast I can get things going. I think, by the end, it resulted in a pretty satisfying concept. Something I want to make a concept trailer for when I have some spare time. I could see it filling a nice niche, with straightforward mechanics and purpose, and an easily defined scope. I will probably share more down the line, when we can bring it together to better show it off.
So I continued to follow where the wind would take me and…
Out of left field I initiated a pitch for an O3DE editor interface overhaul to modernize the aesthetic and bring it into the 21st century.
It started as just a “we kinda brought it up, so let’s mark it down as something to dwell on” post. It’s good to make places to consolidate ideas and plans like that. A very strong point of the structure of the O3DE Discord server.
However, while trying to find references to the original “BlueJay” Design System and make connections from the discussion to action points, and maybe pitch out a colour scheme… I ended up firing up Photoshop and sussing out an entire mock of a modern GUI for the editor.
I like to think I did a good job. It did NOT look like this when I started. I didn’t think anything would come of the fuss. But I surprised myself. It actually started coming together, and then I had a frame of reference that allowed me to simply work out the rest of the UI by following the pattern. In artistic pursuits, I do best when I have a proven starting point and can work forward, utilizing that structure.
This is actually how I was able to make Some Peoples Kids. A colleague and I mocked the characters, and then, from that reference, I was able to make the entire series!
With those things tackled, I returned to the server.
I did not want it to linger into the new year, and I wanted to be able to call it done in this very Review. But it is not done yet. I will not admit to it yet, either. You’ll just have to keep reading.
I broke out from the pipeline work to do some fun things: I realized I could search for equivalents to the many productivity and supporting services I already depend on.
I could set up OpenProject for Kanban and Agile project planning…
Docmost for Confluence-style nested page documentation…
And GitLab for some local Git repo hosting.
Eureka!
While evaluating GitLab, another really gorgeous and full-featured service, I realized it also has CI/CD automation systems. We could hook into that nicely, and not only have a simpler pipeline system, but capitalize on all the reporting and logging features that come with CI systems.
Now we’re cooking with gas! Let’s get that stuff going!
Not enough RAM.
Ha.
So it turns out running 6+ always-on hosted services on a tiny server box uses a lot more RAM than no RAM. So the 32GB I got with the server box, to stay within budget, was not enough. Not by a long shot.
Smarty. Who would have thought software uses RAM?
Ay, ay, ay..
As an immediate stopgap, I had to massively throttle all the services I already had, and flat-out shut down the OpenProject service altogether. Thankfully, this brought the system back into an “operable at all” state. Not ideal, but everything was still working.
This all came to a head because, as part of an automated system, you need “runners” to actually identify that an automation job is needed, and then process that job. Runners can only do that by being left on in standby. Yet more RAM was needed to keep one ready and able.
But it was there, in standby, prepared to run the automation that I had refactored into a GitLab automation pipeline, both for Windows and Linux…
It’s actually happening.
In this, we answered some of the questions around this pipeline.
- GitLab is the thing that detects the change in the repo. All of that is built in. Awesome.
- Nextcloud is running on the same server, so you can just share the deploy folder with it, and it can dole out HTTP access to the files over the internet.
- A Python GUI allows the client to check for updates, and then decide when to actually process the update.
- Using “deltas”, we can identify only the changed files from the new “Install” build output, and if the client already has an instance of the engine, it can simply download and patch in the deltas, rather than re-downloading the entire 3.5GB fileset. All from a lightning-fast server internet connection.
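The delta idea boils down to comparing content hashes between the client’s current install and the server’s latest build, then transferring only the differences. A minimal Python sketch of that comparison, with made-up file names and a made-up manifest format (the real tooling may work differently):

```python
# Sketch of delta computation: hash every file, then diff the manifests.
import hashlib

def manifest(files: dict) -> dict:
    """Map each relative path to a hash of its contents."""
    return {path: hashlib.sha256(data).hexdigest()
            for path, data in files.items()}

def delta(local: dict, remote: dict) -> dict:
    """Files to download (new or changed) and files to delete locally."""
    changed = [p for p, h in remote.items() if local.get(p) != h]
    removed = [p for p in local if p not in remote]
    return {"download": sorted(changed), "delete": sorted(removed)}

# Hypothetical before/after: the editor binary changed, one DLL was added.
old = manifest({"bin/Editor.exe": b"v1", "lib/Core.dll": b"v1"})
new = manifest({"bin/Editor.exe": b"v2", "lib/Core.dll": b"v1",
                "lib/Cinematics.dll": b"v1"})
# delta(old, new) downloads only Editor.exe and Cinematics.dll
```

Instead of moving the full 3.5GB every update, the client only pulls the files whose hashes changed.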
Great! With answers come solutions!
Yaaaaaaah…
I had the solutions… but am still beholden to LLM output in this particular instance. Writing multi-platform, Git-driven, automated game engine build systems is not something I know about at all.
So I ended up building in circles. The folly of using LLMs as a crutch, once again, clear as day. The pipeline would start running through one stage of the build and fail. The fix would solve that failure point, but then reintroduce failure points earlier in the build process. Then fixing those would reintroduce a failure we had already resolved, etc. etc… etc……
It was so frustrating, so arduous, and certainly made me feel incredibly guilty for wasting so many resources just circling the drain.
Some of it was definitely under-appreciating the complexity of an automated pipeline’s needs. Over the process, I broke the automation out into more and more incremental steps to try to preserve the successful parts and isolate the broken parts. This was most catastrophic when I’d build an engine for 1.5 hours, then have it fail at the end for an unrelated reason, and have the whole build dumped because the stage as a whole failed due to that small mistake.
I figured it out piece by piece. Getting the right things cached; getting the right things loaded, and when; recovering from failures with the least damage on progress; getting the right files to the final placement for the rest of the pipeline; etc.
FINALLY, at the last hour.
At build attempt 76 I got it building all the way through, Windows and Linux included!
It was a miracle!
By build 84 I had it all complete.
I built a nice server-side version manager to handle the destruction of excess builds and relics from the past 84 build attempts.
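The retention logic itself is simple. Here’s a hedged sketch of a “keep the newest N builds” policy, with hypothetical build names (our actual version manager does more than this):

```python
# Sketch of a retention policy: keep the newest N builds, flag the rest.
def prune(builds: list[str], keep: int = 3) -> list[str]:
    """Return the builds to delete, keeping only the `keep` most recent.

    Builds are assumed to sort oldest-to-newest by name (e.g. dated names)."""
    return sorted(builds)[:-keep] if len(builds) > keep else []

builds = ["engine-2025.12.01", "engine-2025.12.10",
          "engine-2025.12.18", "engine-2025.12.24", "engine-2025.12.29"]
# prune(builds) flags the two oldest builds for deletion
```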
Then I got the client-side engine sync going. It automatically detects the engines available on our file server, lets you decide which engines to track for installation, lets you modify the build destination, everything.
The client successfully identifies whether it has a previous version or not. It verifies the current engine version as it changes. Calling update gets the relevant files and puts them in the right directory.
Yessssssss…
It even registers the engine if you don’t have it locally yet, changes the engine destination if you change the path, and unregisters the engine if you remove it.
Yusssssss!!!!! AAAAAAAAH~!
So. I did, in fact, successfully prop up the first draft of the GS server!
- I duplicated all our repositories from Bitbucket to our GitLab.
- I started recording plans and info in our internal docs, consolidating what is still relevant from the old.
- I have organized and hosted a ton of files and assets I only had stored on my local machine, allowing far easier introduction and use of media as we build our projects.
- You can check the engine files straight from Nextcloud, along with evaluating them through our version manager.
- GitLab pipelines are proven. Our current one can be refined, but we can also build and deploy the game itself to Steam, deploy documentation updates for our GS_Play toolset, and anything else!
- I have already upgraded the system with 64GB of RAM and a hard drive expansion, which means we can reintroduce Open Project and add the PenPot service to our internal services.
- And now, we can tune our instances of the O3DE engine, have them build and deploy right to our working computers on demand, and solidify a stable and far faster compilation workflow for our projects moving forward.
This is so cool. It was awesome to build. The potential is there and ready to be utilized. After all the crushing frustration, it was so much fun and so interesting to have figured out.
Everything server-based from here on out will be significantly easier to do now.
So what’s in store for the new year?
First of all, get sick on January 1st. Nothing lights a fire under someone like starting the new year’s push with a painful sore throat, hacking your guts out all day, and suffocating at night!
But actually.
Genome Studios is poised to start mobilizing around the many opportunities we have at hand.
Over 2025, we’ve proven we know O3DE development; I am capable of directing the rapid and pointed creation of prototypes and proofs of concept; we have a potent, robust, and growing tech stack that is making development in O3DE ever easier; we have shored up our workflows and infrastructure to optimize our work; we’ve staunched the growing cost of depending on subscription services; and we are still alive, excited, and driven to march forward towards the evolving realization of the studio as a potent participant in the video game industry: a creative company, a service provider, and a leader in the O3DE ecosystem.
Let’s go.
And with that, December has been reviewed. As always, thank you so much for coming along on this journey and following us through the many throes of development.
Here’s to a terrifying, but thriving year ahead!
Btw, you can now rep Genome Studios on Discord with our brand new shiny server tag! Every bit counts!
P.S. Did you know you can keep up with our Month in Reviews by signing up for our newsletter? Just check below!
Want to keep track of all Genome Studios news?
Join our newsletter!