snõwkit dev log #2 (history)


The time is upon us - another dev log!
If you missed it before, snowkit dev logs are a way to express what I'm thinking about and working on behind the scenes, and a space to address questions, breaking changes and future plans in one convenient place.

previous editions
- #1 “assets”


Let's start with why I might be rewriting git commit history on the snow repository!
There are some important things to take note of here.

When I started writing snow I was in exploration mode: I code in a direction and see what happens. I had a small team on board to help test things and use it in projects, and I used GitHub as a convenient place to store links to WIP builds. Zips, APKs, basically any random files I was experimenting with made it into the repo.

This added up to a considerable amount of debt in the history that, foolishly, I didn't anticipate, and before I knew it a clean git clone of the repository was around ~200MiB. I left this for as long as I could, and recently the opportunity came up to get it out of the way: I had been exploring build server options again, I had been waiting on Haxe 3.2.0 to finalize some large refactoring I had done, and a few other things needed to align.

Now a clean git clone with full history is 17.73 MiB. If you run git clone --depth 1, which fetches only the most recent commit, it's down to 3.21 MiB, which is obviously a lot better.
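The difference is easy to see for yourself. Here's a minimal sketch of how a shallow clone fetches only the tip commit, using a throwaway local repo rather than the real snow one:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Build a small stand-in repo with three commits to clone from.
git -c init.defaultBranch=master init -q origin-repo
cd origin-repo
git config user.email dev@example.com
git config user.name dev
for i in 1 2 3; do
  echo "change $i" > file.txt
  git add file.txt
  git commit -qm "commit $i"
done
cd ..

# A full clone keeps all history; --depth 1 keeps only the tip commit.
git clone -q "file://$tmp/origin-repo" full-clone
git clone -q --depth 1 "file://$tmp/origin-repo" shallow-clone

echo "full clone:    $(git -C full-clone rev-list --count HEAD) commits"
echo "shallow clone: $(git -C shallow-clone rev-list --count HEAD) commits"
```

Note the file:// URL: git ignores --depth for plain local path clones, so the URL form is needed to exercise the shallow transfer.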

I read into it A LOT before rewriting git history. A common mantra is that it's evil/bad/world ending, and I can see the cases where it becomes a problem, but I have not found a viable alternative that doesn't basically involve deleting the entire repository and discarding all the other (more relevant) history anyway. In reality I had already experimented with it briefly and pushed to master with the history rewritten, so I decided to lean in and remove the large items from the history entirely. All the rest of the history is still there; only the erroneous junk and the ndll/ folder were removed. Doing it now, during alpha, means it's done for good, and in a week you'll have forgotten I ever did it.
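For the curious, here's a sketch of the kind of rewrite involved. The post doesn't say which tool was used; git filter-branch is one common way to strip a folder from every commit (the BFG Repo-Cleaner is a popular alternative). This uses a throwaway repo, not the real snow one:

```shell
set -e
export FILTER_BRANCH_SQUELCH_WARNING=1
tmp=$(mktemp -d)
cd "$tmp"
git -c init.defaultBranch=master init -q repo
cd repo
git config user.email dev@example.com
git config user.name dev

# Commit some junk binaries alongside real source.
mkdir ndll
echo "pretend this is a 30MiB prebuilt binary" > ndll/snow.ndll
echo "real code v1" > Main.hx
git add . && git commit -qm "add source and binaries"

echo "real code v2" > Main.hx
git add . && git commit -qm "more work"

# Rewrite every commit, dropping ndll/ from each tree.
git filter-branch -f --index-filter \
  'git rm -r --cached -q --ignore-unmatch ndll' HEAD

git log --oneline -- ndll   # no output: ndll/ is gone from history
git show HEAD:Main.hx       # the source history itself is intact
```

After a rewrite like this every commit hash changes, which is exactly why everyone who cloned before needs to re-sync, as described below.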

Updating after the rewrite

If you made local changes to snow that you want to keep, back them up first so you can be sure you have them.

If you have wild amounts of errors after trying to update snow, you have two main options (if you aren't a git wizard, which is ok, not many are):

  • git pull -X theirs This tells git to merge using the "theirs" (i.e. the snow repo) side of any conflict. It doesn't ask about your own local changes; it says "update and use whatever the repo has". If you're running haxelib update snow, you might want to go into the snow folder and run this command manually on the command line, to be sure there are no quiet conflicts.

  • re-clone The repo is a lot smaller now, so you could just delete it and re-install snow if you feel that's easier.

If you have trouble, remember that git reset --hard HEAD will clear any conflicts so you can try pulling again, and if all else fails, reinstalling snow is the same as a fresh clone.
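Putting the backup and reset advice together, here's a sketch of recovering after an upstream history rewrite. It uses a throwaway local repo as a stand-in for the snow remote; with the real repo you'd run the last few commands inside your snow folder:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the snow remote, with one commit.
git -c init.defaultBranch=master init -q origin
(cd origin &&
 git config user.email dev@example.com &&
 git config user.name dev &&
 echo "v1" > a.txt && git add . && git commit -qm "one")

git clone -q "file://$tmp/origin" local

# A local change we care about.
echo "my notes" > local/notes.txt

# The remote's history gets rewritten (amended here as a stand-in).
(cd origin && echo "v2" > a.txt && git add . &&
 git commit -q --amend -m "rewritten")

cd local
cp notes.txt ../notes-backup.txt   # 1. back up anything you changed
git fetch -q origin                # 2. fetch the rewritten history
git reset --hard origin/master     # 3. match the remote exactly
cat a.txt                          # now contains the rewritten "v2"
```

As it happens, git reset --hard leaves untracked files (like notes.txt above) alone, but backing up first is still the safe habit.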

Where did the ndll/ folder go?

See the automated builds header below for links and details

Part of the pruning was to remove large files from the history and stop weighing it down, and that includes files that are actually needed to use snow! I could push the ndlls onto the new slimmer tree, and it would still be quite small compared to before, but there are some important upcoming changes that make this idea a bit of a non-starter.

What is an ndll!?
The native code that gets used by your application ends up in one of two forms: static (embedded in the app) or dynamic (loaded at runtime). The term ndll refers to a Neko-compatible d(ynamic) l(ink) l(ibrary), but over time the folder came to include the static prebuilt versions of the snow native code and its dependencies too, in one convenient, predictable location.

Ideally all platforms would use static linking except where that's not possible; originally they all used dynamic linking except for one. As the transition happened, both were added to the builds and have stayed in there for convenience.

We're switching to automated builds for all native targets

I'll go into detail on this further down, but I have (re)set up the automated build server that runs snow builds for us.

I am getting rid of ndll/ as a concept in the very near future
I have been waiting for some things in Haxe 3.2.0 that will allow the code to be a lot simpler, easier to maintain, easier to modularize, and less complex. This also means less code and simpler interfaces for the native sdk to deal with, making our lives easier all round. It does mean that prebuilt snow libraries won't really exist in their current form, and the need for them will fall away.

I'll go into detail on these changes as they happen, but needless to say, lugging a ~30MiB ndll folder into the history for it to carry around for good, when the files are only needed temporarily, would be silly. I don't want to rewrite history again, and once something is added to the history, it sticks around.

snow native SDK and other clean up

You'll notice that I am shifting ground rather heavily on the snow repo. I have many changes lying in wait for the Haxe 3.2.0 release, so I can start finalizing the native sdk, streamlining the snow internal code toward its final form, and more. Now that I've had a good run with the current code in practice, I know what I want the final API to look like, and before I can bring in the new luxe asset system and, later, the new luxe renderer, I have to do the work necessary to get that done (not to mention ports and other things in the shadows).

I go out of my way to keep the builds working, but since we are all on the same branch there are going to be times when things get into an unusable state briefly, and there are going to be times where I must make necessary changes to push us out of alpha.

Keep an eye on the chat or Twitter, and if you get stuck let me know. And remember, since we're using git, you can always roll back to a previous commit and continue as you were.

Beautiful Buildkite


Buildkite is a continuous integration service that ticks every checkbox for what I want from automated builds. It doesn't just give me what I need and get out of my way; it goes above that and excels at handling our complex needs: a cross-platform set of frameworks like snow and luxe. With just the core snow library weighing in at 22 native binaries across 5 platforms with 15 different architectures, Buildkite makes this process a breeze.

Early on in snow development I set up a TeamCity server, and it was quite nice to work with compared to the alternatives I had tried. I tried Jenkins and Travis, and basically ran through many solutions (including my own), which all worked to varying degrees, but always SOMETHING crucial wasn't there. TeamCity won by a large margin because it was easy to use, easy to maintain, agent based (more on this in a second), and exactly what I wanted in concept, but in practice I found myself wanting more. It's a Java app, so it brings all of that baggage with it.

What I reaaally wanted was a brutally simple, to the point, agent based build service that would handle the things I don't want to, and let me handle the things that I do.

Enter Buildkite!
While I was stumbling around setting up TeamCity (again) for the new builds recently, I somehow landed on the Buildkite home page. I was really taken with their setup; it's exactly what I wanted. It runs my scripts on my machines as agents, which lets every platform snow supports be built exactly as I need, and it triggers automatically from git commits.

The "my machines" part is quite important when dealing with platforms guarded by NDAs and projects that aren't properly public facing. For example, a lot of console hardware requires a fixed IP address to develop for, so you literally have to pin a machine down behind a specific network configuration in a specific location and leave it there.

Buildkite continued to woo me: its robust API, good documentation, lots of developer activity on the repos. It looked perfect. The agent code is also open source (and written in a great language, golang), which sweetened the deal.

After getting over-excited I realized that they didn't support a Windows agent! At least not on their main pages and documentation... That would immediately disqualify it for me, but before the sadness set in I noticed there was a Windows agent under the releases tab on GitHub.

I found they also had an active Slack chat, and decided to stick my head in to see if I could get my hands on Windows testing sooner rather than later, as it appeared to be in beta and required activating your account for early testing.

The development team were more than accommodating, and very soon after that I had set up all agents, including a Windows one, and had all targets covered.

The Buildkite team were interested in what I was doing and in helping me solve the CI problems I had. In fact, a lot of the things I just thought "would be nice" (like displaying badges for the build status of each step) they added immediately. Aside from some initial bugs on the Windows end, and some minor hacks to get things working at first, all of the bugs found have been fixed promptly and it's been smooth sailing.

I would highly recommend Buildkite if you're looking for an automated build service that can build anything that can run a shell script. I doubt I'll be switching away any time soon.

Buildkite status and downloads

So a shiny new build server! How do I get access to the ndll files?? How do I see what's going on?


To use the ndll files from the build server, simply download the ones you want for your platform, and put them in the root of your snow folder.

ndll/ <- They go in this folder (ndll/Windows/, ndll/Mac64/, etc)  
Disclaimers and such

I know it's not super ideal to have to copy files into a folder, but for the very short term it will do. I'll update the documentation to reflect this, and if you do have to update the binaries (after the first time), I'll be sure to put a note in the commit logs as well as in the chat, on Twitter, etc.

So: Download the platform you need, copy it into the folder, continue as before.
Please let me know if you have any issues with the builds! And read below regarding Windows builds, as there are some important changes.

Windows builds now use VS 2013

Recently, Microsoft did away with the Express versions of Visual Studio and released the full featured version of Visual Studio for free. It's called Community Edition, and it's an excellent initiative because it solves so many things.


Firstly, Visual Studio 2010 was broken out of the gate; it required SP1 at minimum just to work (I don't know why they didn't make a bundled installer). Secondly, Express (the free edition) used to be crippled in ways that made certain things very hard. In order to do 64 bit builds, you needed the platform SDK. To use the platform SDK, you had to uninstall 2010, install the SDK, reinstall 2010, and then install SP1, or else it wouldn't work. You also had to jump through MS account registration hoops and more just to get the basics. You had to manually configure the path of the SDK once installed, otherwise it wouldn't be found, and this was often flaky for command line builds (which we were using). The debug windows were limited and missing some important tools if you're dealing with the native level, and it didn't allow editor plugins. It's also 2015; that stuff has been completely broken for 5 years and was never fixed.

All in all, VC2010 served its purpose and was a good tool at the time, but it's thankfully redundant now and should be forgotten. For this reason, I am phasing out VC2010 as the default. I'm not disabling support for 2010, but the prebuilt libraries and configurations will treat 2013 as the default, and if you're still using 2010 you'll probably need to build compatible binaries yourself.

VS2013 is full featured, it's free, it just works, and it has all the bells and whistles. We have no excuse.

Speaking of keeping up with the times ...

I've been meaning to mention this ahead of time so people are more aware of how we will deal with these types of things:

During alpha especially, when new versions of dependencies are released, we will always align with the latest version.

The easiest example of this: Haxe itself. When Haxe 3.2.0 is finalized and released, the focus will shift to 3.2.x and higher only. I won't needlessly break code against 3.1.3 in the short term, but I won't be maintaining legacy code paths during the alpha. Doing so would get in the way of ensuring that the code around 1.0.0 is as clean and relevant as possible for what is current at the time, without historical code preventing it from maturing into its best form.

You shouldn't worry about this in general, but take note that the code will always align with the future, because that's where we are heading.

packages

Another thing I should note is that we have been working on some pretty fun stuff for Atom, which is a text editor with customizable features (like build toolchains and code completion for Haxe).

I've teamed up with Jérémy and Tilman and with some solid testing from Dan and quite a few others to build some decent support for the editor.

Part of the goal of snowkit as a collective is to band together and ensure that the ecosystem around Haxe is a great place to be - and this is one of the first examples of where this is starting to show.

You can already use this for haxe or flow builds in full, with completion and error checking and more. Be sure to check the readme as well, as there are some requirements.

Check out the typedef completion Jérémy has been experimenting with in Atom:

This is how the error linting currently works:

You can get the packages with full readme below.

For just haxe + hxml builds:
If you're using flow:

What about Sublime Text?

The goal in building strong support like this in Atom was to separate the code in such a way that we can port the majority of it to Haxe! What does that give us? Well, Haxe 3.2.0 adds the Python target, and Sublime Text plugins are written in Python. Atom plugins are written in JavaScript... so we can just share all the code.

It's great to use Haxe in practice like this, so keep an eye out for the plugins and packages; you're gonna see the shared Haxe code take form here in the very near future, which will mark a major update to the Sublime Text plugins, bringing them as close to parity as possible with Atom.

One other cool thing that Sublime Text has (finally) added is a way to display custom popups and panels, which will allow us to expand into things like doc hinting and linting errors as well, rather than trying to shoehorn everything into the completion lists.

Wrap up

It seems this post is just getting to the important bits and still being pretty long. I'll split the other parts into dev log #3, which I'll post quite soon so we're all on the same page.

I can't wait for all the new stuff to be in, I think you're gonna like it.

Also, while you're hanging around, have a look at the dev log Nico and compostface have posted for their game made with luxe. There is a gif for every step of the development they've been doing, which is fun to see.