Last month (maybe longer) I wrote about one of the top three production mistakes we made in Thimbleweed Park. Today I'd like to talk about the second one.
Just to be clear, these are production mistakes, not design or programming mistakes (although sometimes the line is blurry).
The first one was not integrating FMOD into my engine. As I wrote, it was a Penny Wise and Pound Foolish decision.
The one I'd like to talk about today is Continuous Integration, but first a little primer.
"What the hell is this witchcraft you call continuous integration!" I can hear you saying, and don't feel bad. I wonder how many indie game devs use it. My guess is a lot fewer than should (a quick poll of friends is standing at 0%).
Continuous Integration (or CI, as the pros call it) is when a separate and often dedicated machine continuously builds (compiles) your game whenever you check in code.
This is good for two reasons:
1) If you check in code that won't compile on one of the platforms (I don't care how good or careful you are, this will happen to you) the CI machine will let you (and the rest of the team) know. It helps ensure that the game can build at all times. If you can run a battery of unit tests, the CI machine will often do this as well.
2) Since the CI machine is a standalone machine, its dev environment is a known quantity, so installing some goofy tool or new version of python on your personal dev machine isn't going to introduce oddities in the build.
3) Bonus point. You always have a build ready to put into test and then distribute to Steam, Xbox, etc.
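At its heart, a CI job is nothing magic: it's a script the build machine runs on every check-in, and if any step fails, the whole job fails and the team gets yelled at. Here's a minimal sketch of that idea; the step names and commented-out commands are placeholders, not any real project's setup:

```shell
# Minimal sketch of what a CI job does on every check-in.
# Each step aborts the whole job on failure, which is the entire point.
set -e

run_step() {
  # Run one named build step; a real job would call make, msbuild,
  # your test runner, etc. instead of the placeholder commands below.
  local name="$1"; shift
  echo "[ci] running: $name"
  "$@"
}

run_step "checkout"   true   # placeholder for: git pull
run_step "build"      true   # placeholder for: make -j4
run_step "unit tests" true   # placeholder for: ./run_tests.sh
STATUS="ok"
echo "[ci] build ready for Steam/Xbox/etc."
```

If any `run_step` command exits non-zero, `set -e` kills the script right there, which is exactly the "let the whole team know" behavior you want.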
I've used CI on previous projects and it can be a pain to set up, but it's a time saver from then on. It's indispensable and you should always use it!
Did we use CI on Thimbleweed Park? Of course not! That's why I'm writing this blog entry.
When the Thimbleweed Park engine got to the point where it could actually be used and David was brought on, I thought about CI. Previously I had used a dedicated machine in the office with Jenkins installed. I only ever needed to make a Windows build (or builds that used VS) so it was one machine with a few different flavors of builds.
For Thimbleweed Park I needed to make Windows, Xbox, Mac, Android, iOS (and later Switch and PS4) and Linux builds. Without some fancy hoop jumping, this was going to require three machines.
My mind fuzzed over and I said "later".
Throughout the project I kept revisiting CI, and being overloaded with work, I kept saying "later". As the end of the project rolled around it seemed pointless since the project was almost over. Of course, I was wrong and we continued doing updates and ports for a year.
At the time I was looking at CI, cloud-based services like AppVeyor and TravisCI either didn't exist or never showed up on Google searches. Today you can just use these two cloud services to build Windows (or anything that needs VS), Mac (Mac and iOS) and Linux. Android can be built on any of them. If you don't mind waiting 10 or 15 minutes for the build to start, both services are very reasonable (and free for open source projects).
In the end, I built the Windows, Mac and Linux builds on my local machine. I had the entire process scripted, so running a single bash script built all the Mac versions. I ran a Windows VM on my Mac and would launch it and run a .bat file to build all the Windows flavors. Same with Linux: boot the VM and do the builds.
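A per-platform build script like the ones described above can be a simple dispatcher: figure out what machine you're on, then build every flavor that machine is responsible for. This is a hypothetical sketch (the flavor names and the build step are made up for illustration, not the actual Thimbleweed Park scripts):

```shell
# Hypothetical per-platform build dispatcher (not the real TWP script).
# Run it on the Mac host, the Linux VM, or the Windows environment and
# it builds every flavor that platform is responsible for.
set -u

build_flavor() {
  # Placeholder for the real compile step (xcodebuild, msbuild, make...).
  echo "building flavor: $1"
}

case "$(uname -s)" in
  Darwin) flavors="mac-steam mac-gog ios" ;;
  Linux)  flavors="linux-steam linux-gog" ;;
  *)      flavors="win-steam win-gog win-xbox" ;;  # Windows shell
esac

for f in $flavors; do
  build_flavor "$f"
done
```

The nice side effect of scripting it this way is that the same script a human runs by hand is the one a CI machine would run, so adding CI later is mostly a matter of pointing it at the script.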
Thimbleweed Park was a tad complex because there were two parts that had to be rebuilt (the engine and the game), and while intertwined, they were separate and it was common for one to be rebuilt and the other not. It was also complicated by all the game build tools being on the Mac, so you couldn't actually build the game code on Windows. This didn't matter since the game code/files were 100% cross-platform. I could build them on the Mac and they would run on Windows, PS4, Switch, etc.
It makes CI a little more complex because your CI might be building the engine, but something has to merge it all together (another CI process). The game code needs to be built first (and you don't need to rebuild it for each platform), then the engine process could grab it and merge it, or a third process could combine it all, but each piece would have to know when the others were done (and if a new build was needed). My point isn't that it's impossible, just that it's hard and a problem that I ultimately (and unfortunately) didn't need to solve.
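Sketched as ordered stages, the dependency described above looks something like this. All the names here (the artifact, the platform list) are hypothetical; the real pipeline never got built, which is the whole point of this post:

```shell
# Sketch of the two-stage pipeline: build the platform-independent game
# code once, then build each engine and merge the game data into it.
# Every stage body is a placeholder echo.
set -u

PLATFORMS="windows mac linux switch ps4"
echo "stage 1: build game code/files once (tools are Mac-only)"
GAME_ARTIFACT="game.pak"   # hypothetical name for the built game data

for p in $PLATFORMS; do
  echo "stage 2: build engine for $p"
  echo "stage 3: merge $GAME_ARTIFACT into the $p package"
done
```

The coordination problem is all in that loop: stage 1 runs once, stages 2 and 3 run per platform, and something has to notice when stage 1's artifact is stale and kick everything off again.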
Next time for sure!!!
No, really, next time for sure.
Even with AppVeyor and TravisCI it can be a pain to set up and maintain in the long term. For us, we also start up new repos/projects all the time and don't want to waste a day or two each time mucking around with the initial CI setup.
We ended up writing something that lets us set up a Bitbucket/GitHub repo at the same time as CI (see here: https://github.com/irreverent-pixel-feats/ecology).
It's been a real time saver for us, especially when we want to prototype something but still want CI, or if we're participating in a game jam. It's nice to just have CI there, and we don't want to waste time during the jam setting it up (but we can't do it in advance when we may not know exactly what we're doing).
It's written in Haskell, which isn't everyone's cup of tea, so I'm not suggesting anyone use what we have written, but it's definitely worth writing something like this up.
(We have a blog post on it here: http://irreverentpixelfeats.com/posts/development/2018-07-21-ecology.html)
Big Red Button
If I had done CI, I would have used a cloud CI service. I don't want a bunch of machines running 365/24/7 in my house. The whole point is to NOT have to manage machines. I have better things to do.
I agree, as mentioned in your post, that setting up CI-systems can be quite time consuming and this was a great way to reduce the risk of being entirely reliant on a specific setup (which can, and probably will, break at some point).
The setup on the Jenkins machine was quite simple after that with just having to run a simple gulp job with npm and the added benefit of everyone being able to run the jobs from their machines as well - if needed.
It kind of changes the question of "how" a job should run to a mere "when" - which, to me, is a great deal easier to answer.
I'm Aaron. I'm from Australia.
I just wanted to say that your blog is truly an invaluable resource to people like myself who are aspiring to be adventure game developers. I was contemplating how best to design a point-and-click just before I played Thimbleweed Park. The game unexpectedly taught me about puzzle dependency graphs right when I was trying to figure out how on Earth you did it. I researched the graphs and they led me to your blog. Since then I've been re-visiting it for other nuggets of wisdom like the retrospective advice in this post of yours.
I've been designing puzzles with the graphs for months now. At first it involved some serious abstract thinking because I don't have any art yet (it's hard to imagine puzzles in backgrounds you don't have). Anyway, I see a lot of value in designing the puzzles and story first and foremost, and the charts really do that well. I really can't imagine how such a thing could be achieved without the use of the tools you've given me. After a lot of practice I'm starting to get more efficient and creative with my puzzles (it's also good to know that they prevent me from making leaps of logic that could result in serious game-breaking consequences).
I'm a child of Monkey Island. Born in '89, my parents sat me down with TSOMI when I was only six or seven years old. They told me puzzle games would make me smarter and yours pretty much taught me how to read and write.
Now I'm designing my own game and you're still teaching me so much.
I just really wanted to say thanks for everything :)
P.S.: Purchased TP, played it all to the end. Did make heavy use of hotline though.
You should look into the new possibilities of Azure Pipelines; that provides hosted OSX, Windows and Linux build pipelines.
I haven't tested it myself yet, but will eventually look into it, it sounds cool...
Docker can be helpful in these cases.