snõwkit dev log #1 (assets)


Sometimes, when there's lots to be done, and things are moving ahead nicely, there are things worth mentioning before they happen. There's a lot going on in my head as to where we will end up, when we might get there, and how - but you can't read my mind (I think).

There has also been a lot of news and interesting developments, like The Westport Independent (made in luxe) being published by Coffee Stain on Steam later in the year. This gives us a lot of things to discuss, so this might get long.

Community dev log!?

The snõwkit dev log posts, starting with this one, will discuss in varying detail all the libraries, developers and aspects of the snowkit community. The dev is a key word here: they will relate to the development of the community and the libraries within it.

Topics like general programming posts, future goals, calls for feedback, statistics updates and a place to explain changes on the horizon will fit here nicely as well.

I thought maybe it would be fun to give a view into the type of decisions I am making, why, and how I am solving things along the way as far as development goes. If not interesting, at the very least it helps me catalogue thoughts and catch oversights as I explain things “out loud”, so I will be doing it anyway :D

notes on focus sheets

One key point to make about the dev logs - they are a supplement to the ongoing focus sheet for a particular library. The focus sheet and github milestones are still the place to keep track of progress of the current set of tasks for each library. These posts will add detail.

Two, some libraries have less distinct focus milestones in their current lifecycle. Instead of active focus sheets, these libraries are driven by their dependents (especially during alpha). These dev logs will serve as a way to look into the changes as and before they happen. An obvious example of this is snow - 99% of the changes or work going into snow will directly relate to the projects building upon it, like luxe. This is also true for flow - changes and fixes come from use in the wild as the libraries using it expand.

This simply means that instead of active focus sheets (whose tasks would overlap with other focus sheets), these libraries will get their voice in dev log updates here, until they have something that makes sense as a milestone.


When I was working on luxe and others prior to announcing, I had a clear goal of specific things that needed to be in place before flipping the switch. Once that switch was flipped, I wanted to give some space to the community and the libraries, allowing the dust to settle a bit. This gives time before committing to any specific roadmap or features and allows assessment of the worst offenders getting in the way of using the libraries in practice.

I find I work better when I am solving actual problems, and not theorizing on the best tasks that fit a year from now anyway, so it has been good to see if my initial goals aligned with reality after release. So far so good!

I do however know that people are curious, and excited, and love to hear news and follow closely as they are investing time into these projects, and I want to honor that with more insight from my perspective. As many people find their way into the community and various types of projects take root, it makes sense to start sharing more about the thoughts and processes as they are happening, rather than after they have happened.

This includes roadmap details, which some people have been asking for - even if they aren't set in stone or dated or anything, they help. So here we go!

luxe alpha-2.5~ish dev

ripples turning to waves

The alpha-2.0 milestone is coming along nicely, around 50% complete by task list terms, with quite a few of the annoyances in the windowing solved. Next it was time to start resolving the asset related tasks, and there are quite a few rabbit holes to pursue in doing this.

snow unified data transport
I built snow (and thus luxe) with the idea that a single common type should be used for all forms of “data” transfer like file loading, or textures from pixels, or data uploaded to OpenGL - all of that stuff should share a single type. This drastically simplifies the mental gymnastics needed to transfer data between systems, and just makes the most sense for how I work.

Originally, this common type was based on some really handy code from NME, the snow.utils.ByteArray class, which I had ported when I made the original version of lime back in 2013. Once I started from scratch with snow, I took the bare minimum needed to reuse that class, and used it for everything. This worked out great because the assets, the OpenAL code, the OpenGL code - everything had some use for it.

The other data related code I had taken across were the “Typed Arrays” (which at the time I pulled from OpenFL, based on the NME code too). These were almost implemented according to the specification, but had a lot of overlap in the code, used a lot of Dynamic, and had a few bugs that prevented them from being the backbone at the time. I didn't want to rely on that scale of change back then, so I did a bunch of refactoring and cleaning, implemented unit tests (thanks to help from chman) to ensure they were at least correct, and let them be. I decided to roll forward with snow.utils.ByteArray as it would do fine, and switching out the underlying type later would be relatively painless once I could tackle them properly.

“newtypedarrays” branches
While working on a couple of exploratory branches locally for the snow native sdk (mentioned in next dev log), I was already wishing I had the typed arrays done. After much fiddling and consideration I decided to switch to the typed arrays as the primary data transport, unify all their code, fix the remaining issues, make them stronger typed on construction, and remove any old code that was brought across in the original implementation (including the snow.utils.ByteArray class).

I had really enjoyed using the typed array spec when developing in JS for the last few years, and had a long history with them in a multitude of contexts. This would allow snow to focus down on a single reliable set of types for data, which are easy to use, powerful, and are well documented and well tested in practice. I wanted to do that sooner than later, but hadn't yet started on that properly until alpha-2.0.

After Joshua had mentioned trying out the typed arrays as abstract types (compile time only classes), I was pushed over the edge to tackle them head on, as they directly related to the alpha-2.0 milestone and would drastically simplify many of the tasks ahead. I implemented a whole new version from scratch in a separate branch, and redid that branch twice while solving some specific issues. I needed the ES5/6 spec, as unaltered as possible, and on the JS target the browser native types must be used in full. This led to the third and more final rewrite (not final just yet!), which I put in a separate library called hxtypedarray.
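As a quick illustration of why the spec types make a good single transport, here is the shared-buffer behavior they define. This is spec behavior rather than a snow-specific API - the names come straight from the ES typed array spec that hxtypedarray implements:

```haxe
// multiple typed views over one shared buffer - the property that
// lets file loading, audio and GL code all speak the same type.
var buffer = new ArrayBuffer(16);      // 16 raw bytes
var bytes  = new Uint8Array(buffer);   // viewed as 16 uint8 values
var floats = new Float32Array(buffer); // viewed as 4 float32 values

floats[0] = 1.0;
// bytes 0..3 now contain that float's raw bytes - no copies made
```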

There are still a couple of differences that I am in the middle of resolving, but the new implementation includes huge performance improvements at every turn, and opened up opportunities to further simplify the data transport in luxe and snow, especially with regards to the native sdk and native integrations in general.

cascading changes
Because the data transport primary type had to be changed, it meant digging into every corner of the libraries and making the swap. This obviously uncovers a lot of implementation specific nuance, and old code that needed to be addressed. Instead of leaving that for later, it made the most sense to tug on all the loose ends on my way through. This led to loads of benefits from the change, but it will also incur some important API changes in luxe, sooner than I would otherwise have migrated them into the user API. But it's better, and is discussed in full below.

All in all, the new typed arrays and their new implementation are a boon, and I can't wait to migrate those upstream, once I resolve a few pending things on the JS target.


Kristian, who is making The Westport Independent (alpha-2.0 is named after their studio) had asked me about packing assets up into a bundle for a build of their game for GDC and PAX this coming week, the beginning of March. This was something which I had considered for alpha-2.0 but didn't want to pack (heh) too many tasks into one focus sheet given the potential for slowdowns.

Because they are shipping a game in luxe in the near future (on nearly all platforms, but desktop first), and they are one of the oldest pre-release users, I am always happy to dedicate time to helping them achieve that outside of the focus sheet tasks. I have always given pre-release testers priority focus, as this all wouldn't be possible without their help.

A wild mint use case appears
Having to pack assets meant needing a simple tool to make that easy, and this was a good opportunity to apply mint to a real use case and see if it was up to the task yet. The results were pretty good: I managed to knock out a usable tool in a few hours, which even included fade effects, undo/redo, and a quick preview window for looking inside the text/image/audio files for convenience.

For a “disposable” once-off type of tool - one of the ways I use mint myself - it was great. I managed to catch quite a few usage needs that weren't present and fixed a few mint bugs along the way.

The resulting bundles worked well too: a single standalone class let us use the assets from the packed parcel file by dropping it in and calling one line of code from the game. Things "just working" (when you know how they work in the first place) is what I enjoy most about using luxe for everything - it's solving the things I wanted it to solve for me.

Mostly, this was all done using the master tree, with a simple class to do packing/unpacking, and mint to do the UI to select files with. All the flexibility needed already existed, except for a few pieces which I added quickly. The changes in alpha-2.0 will include this use case as well, making it worth the time spent.

It's unlikely I'll release this tool in its current state as it made assumptions for the task at hand - but don't worry, this is quick work and the polished user facing ones will be a whole lot better.


asset api finalization

One thing that has to be addressed during alpha-2.0 is the way the Resource system worked before. It started out based on the original Resource construct in the old phoenix c++ version, but migrated to being a combination of a cache manager and I don't even know what. It was holding its place in a reasonable fashion though, so it could wait. Once I started migrating types that weren't a Resource (e.g. a String, a ByteArray, or a Dynamic for JSON data), I found there was an obvious need to separate the container from the resource itself. At the time, I implemented JSONResource, TextResource and so on as a stopgap to handle the needed direction - but this was a temporary solution.

The biggest problem with the current structure is the lack of clarity around how Luxe.loadTexture can magically “lazy load” when nothing else really fits this model. And usually that's the first usage pattern you encounter, so it becomes dissonant soon after, when you try another resource type. This was a side effect of how things were initially coded - allowing lazy loading on textures - and it became a bit invasive, causing more trouble than it solved. Most troublesome is the disconnect in the API, the lack of consistency it introduced over time, and the internal code that has to account for this stuff. This has to go!

Assets are Async
I've often said that assets are async by nature: they don't load in the same line as their function call (and when they do, it's because they are blocking the primary execution!). Except for blocking cases, which you'll probably want to avoid, every other case of asset loading is async, and this is an inescapable truth you can't skirt around without concessions or assumptions. Even with blocking calls, you might want to show an animated loading screen, which brings you full circle to the async nature again.

With that in mind, snow (and luxe) have always enforced this, except for a few places where it had to be let slide because of when that code was added.

It's still easy to get by (most loading has an onload function for convenience) and the parcels make short work of having things already loaded at a later time, but the proper implementation will finalize this in practice.

One of the worst problems (that the user doesn't face) is that it causes less code encapsulation than I'd like - in other words, the Texture class has code to handle when it's finished loading, and to re-apply any states that changed between creation and load completion. On top of that, texture dependent classes (like Sprite) would have to do similar hijinks with their own code, and account for that too. And classes dependent on Sprite (SpriteAnimation) would have this debt spreading further still. I wanted to remove this cruft and unify all the code so things are predictable and don't require hoop jumping.

However, it's important to me that the usage remains simple, while also becoming properly consistent and clean. The current behavior in the API opened up possibilities that are half measures, and introduced bugs that shouldn't be there (like SpriteAnimation first frame pops, or size assumptions)... All of this is simply because the code can't make informed decisions about the state of an asset. No more!

New Asset Phases
While implementing the new resource system in alpha-2.0, the state of an asset is emitted every time it changes, and at any time an asset (and user) can know where it is in its lifetime. For example, the initial states look like this: unknown, listed, loading, loaded, invalidated. Each state is automatically propagated up into the asset manager, allowing you to track and control the lifetime of an asset explicitly.

For example, you might say var image = Luxe.assets.get('img.png'); and then image.unload(), which will "rewind" the state from loaded to listed. The asset system knows about it, but it's not cached in any way. Later, you could call image.load() which would re-cache the data and allow you to use it again, without having to re-track the paths and so on. Finally, you might say image.destroy() which rewinds one further, to the unknown state, and the asset system is no longer aware of that asset id.
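Spelled out as code, the "rewind" behavior described above looks something like this - a sketch where the method names are illustrative of the alpha-2.0 direction, not final:

```haxe
// hypothetical alpha-2.0 asset lifetime, following the states above
var image = Luxe.assets.get('img.png');   // state: loaded

image.unload();    // loaded -> listed : the system knows the id,
                   //                    but the data is not cached
image.load();      // listed -> loaded : data is re-cached, usable again

image.destroy();   // rewinds one further, to unknown :
                   // the asset system forgets the id entirely
```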

The other states are fairly straightforward, but I'll touch on invalidated as it relates to the next section. This is useful when you “lose” the original resource and it needs to be reloaded. This state is set on assets that are flagged for hot-reloading, as well as GPU resources that get lost when the context is invalidated as a whole.

This just means that if you change a file on disk, it can automatically reload those changes into the asset system, and if you are listening for loaded events with this in mind, your code would handle reloading/recreating them without any work. For a lot of these (like Texture) the actual object reference would still be valid, its GPU ID would be updated, and if needed it could refresh its file data for upload without intervention. So, potentially, no change would be needed in user code to handle the reloads.
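In code terms, handling hot-reload becomes a matter of listening for the state events described above. This is a hypothetical sketch - the listener hookup shown here is an assumption, not the final API:

```haxe
// hypothetical: react when a flagged asset finishes reloading
var texture = Luxe.assets.texture('sprites/player.png');

texture.on('loaded', function(_) {
    // fired again after an invalidated asset reloads, so any
    // dependent state (sizes, uvs) can be refreshed here if needed
});
```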

Asset “tagging”
To give another concrete example behind the invalidated event - and to mention one of the features that will be included in alpha-2.0 - let's look at the idea of a variant/tag for asset paths (the “tag” name is not final, just a descriptor for now).

To clarify things quick:

  • asset path = path on disk/source asset
  • asset id = referenced id in code when fetching it.

Let's say you have your assets set up for high and low quality settings in an options menu. With an asset tag you could easily differentiate the actual asset path to load with this system. Let's say assets/sprites/high/player.png and assets/sprites/low/player.png - These are two asset paths that are referred to by the same asset id. The code refers only to assets/sprites/player.png, the id, and tags set in the system will control which path is actively set.

If the user switched to low quality settings and hit apply, you could change the active quality tag value, maybe using an API similar to assets.tag_set('quality', 'low');. Any assets tagged with quality will invalidate themselves (if a matching tag path is found). If the auto reload setting was enabled, the texture would automatically be updated and the user would be seeing the low quality asset when it was done reloading the changes.

This is also applicable to the commonly used retina style (@2x, @3x asset paths), but instead of manhandling the file paths themselves, it would be tied into the tag system. This gives you more control over what happens, when, and why. At start up time you would set the appropriate tags and any subsequent loads will fetch the tagged variant for you.
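In code, the retina case from above might read like this - a sketch using the assets.tag_set call mentioned earlier, where the tag names and paths are purely illustrative:

```haxe
// hypothetical: at startup, pick the density variant once.
var use_retina = true; // stand-in for a real display density query

if(use_retina) {
    Luxe.assets.tag_set('density', '@2x');
}

// any subsequent load of the id 'assets/sprites/player.png'
// would resolve to the @2x tagged path automatically
```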

Asset API changes coming in alpha-2.0

So, here we are with a neat flexible tag system, and a nice evented resource list which handles per asset, per parcel, and global asset changes and events for us - now we are left with tackling the inconsistencies mentioned earlier.

To do this - the asset phases need to become explicit and fall in line with their actual nature. This means that there will be an explicitly clear separation between load and fetch. In other words, Luxe.loadStuff will fall away, because it's blurring that line (and I don't remember why I put them on the root luxe API anyway).

To elaborate the change in example form:

load - Luxe.assets.load_texture('image.png');
fetch - Luxe.assets.texture('image.png');

This allows a lot of positive change long term, and allows finalizing the underlying asset API for good. It comes with some obvious upsides too:

  • load phase is required for fetch phase to work (as before, except for Texture), but the api usage makes this explicitly clear in user code
  • fetch phase is typed (Texture, String etc) and used consistently when using an asset
  • code using the fetch phase can predictably expect the resource to exist; if it doesn't, fetch returns null. This solves the code problem mentioned earlier, as it removes any load handling from the usage of a resource.
  • allows more obvious error states, if the asset is null, it's not loaded (yet) and that's it. Code expecting a texture can error early if you gave it a null one, and the rest of the code can rely on the fact that it was already checked.

It also includes one downside that I wasn't content to let slide, which I'll show with the following example - one you'll find in common use, to quickly get something on screen:

new Sprite({
    pos: Luxe.screen.mid,
    texture: Luxe.loadTexture('image.png')
});

This worked because the loadTexture call returns a Texture instance that will fulfill "later". In the sprite code, it waits for the texture (mentioned earlier, this isn't ideal), and when it arrives it creates the geometry. This is the overlap between fetch/load that causes the dissonance in the rest of the API.

Now, let's look at the new API in comparison:

new Sprite({
    pos: Luxe.screen.mid,
    texture: Luxe.assets.texture('image.png')
});

Oh, now the fetch is here, but the load is missing? This means that the sprite gets a null texture and will default to a 64x64 white block. So how do we add the load stage? It must be added with the least amount of friction possible, and in such a way that the user can dive headfirst into exploring without understanding what a “parcel” is, or why they need it.

The usual onload pattern (which I'll get to later) would be OK for one image, but for two or more it becomes cumbersome:

Luxe.assets.load_texture('image.png', function(texture){
    new Sprite({
        pos: Luxe.screen.mid,
        texture: texture
    });
});

Solution: The pre-ready preload node!
Or what could be called the “default parcel”.
In your luxe Game class you would have put the above code in the ready handler, which takes care of the fetch part.

In alpha-2.0, an “internal pre-ready parcel” is added and it's exposed via the config handler. This simplifies a lot of user code actually (because they can avoid playing with parcels unless they need them) and gives back the same simple usage with very low friction. In 99% of cases the config function already exists, so it's a one line change in this case, or 3 lines without a config function.

With the new pre-ready/config preload node, the code becomes the following (which is significantly simpler compared to before) and solves all the concerns nicely:

//preload assets will be loaded before ready is called
override function config(config:AppConfig) {
    config.preload.textures = ['image.png'];
    config.window.title = 'devlog #1';
    return config;
}

override function ready() {
    new Sprite({
        pos: Luxe.screen.mid,
        texture: Luxe.assets.texture('image.png')
    });
}

This preload node will match the structure of a parcel (json, text, fonts, shaders, textures, sounds, etc) and gives you a way to quickly dive in without hassle. For a large percent of smaller games, they might never have to use a parcel and can rely on the default one, as they would be loading everything up front anyway.

Promises and onload handlers
The other value of separating the states comes in the form of Promises - all asset load calls will return a Promise for the asset instance. Promises are a programming construct that "promises a value in the future", or a failure if the promise can't be fulfilled.

This matches assets quite well, and is nearly identical in usage to the normal callback style. Also, any query of the value "after" the fact would still get the value it's expecting, as promises work retroactively. Promises also ensure that assets have either a loaded or a failed state, they can't be in both states, which is fitting in this context.

In practice, the current Resource code was based on the nature of promises: you always get a valid Texture instance, and it "promises" that later it will be fulfilled - when it does, onload is called. If you attached an onload listener after the fact, it would be called with the value, allowing retroactive listening. They are basically the same at heart.

The problem is that the concept was baked into the resource specifics, and I'd rather have this exposed as a tested, usable API that can offer promises as a general utility in the toolset. A proper promise implementation unifies all the code and reduces the chance of bugs and inconsistencies in the API because each resource has shared code.

load returns promises
Soon all load calls will return a consistent and reusable promise API - something that was already being put to work, and that alpha-2.0 will finalize.

If you're curious how this looks in comparison, this is the current code :

var texture = Luxe.loadTexture('image.png', onload_handler);  
texture.onload = onload_handler;  

The promises use the concept of "then", so a promise looks like this:

//load is a Promise<Texture>
var load = Luxe.assets.load_texture('image.png').then(onload_handler);  

They are quite similar, but how about multiple assets, and a callback when they are all done?

var one = Luxe.assets.load_texture('one.png');  
var two = Luxe.assets.load_texture('two.png');  
var three = Luxe.assets.load_text('three.txt');  
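This is where a combinator like promhx's `when` comes in - the combined promise resolves once all of its inputs do. A sketch, assuming the promhx combinator API is exposed unchanged through snow:

```haxe
// combining the three loads above - resolves when all are done,
// or routes to catchError if any single load fails
// (sketch only: the exact exposure of promhx in snow may differ)
Promise.when(one, two, three).then(function(t1, t2, txt) {
    // all three assets are available here, fully loaded
}).catchError(function(e) {
    trace('a load failed: $e');
});
```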

The new utility: promhx
promhx by Justin Donaldson and contributors seemed to be a good fit: it's well tested, cross target, and has a flexible enough API to fit the needs that I had. It's also licensed in the same way as the snowkit libraries, and Justin is happy with me dropping it into snow as-is as an internal dependency. This gives users access to the same API at the snow and luxe level, which I'll show some cool examples of in future posts.

So far it's been smooth sailing with promhx. I've made some minor modifications to match the ecmascript spec slightly closer, and implemented the snow event loop it needed to work properly, but nothing major needed to be done for it to work. As far as a utility goes, it includes more than just promises, like Streams, which are basically promises that can deliver more than once (onclick.then(..)). This might come in handy for things like data transfer (downloading, simpler audio streaming) down the road so it's worth having on a low level.

For now, it will be used in three places instead of callbacks, to give them a try out in the core code. In snow, the new io function returns a promise (1), which in turn makes the init stages async (2) as needed, and the last place would be the asset load_* calls in luxe (3).

Let's look at the snow init sequence after the change. Originally (as a stopgap) this init setup used to try to force sync asset loading for the asset manifest and config.json files. Now, each init stage that is async uses a promise to continue or fail. It ends up reading a lot simpler logically, and ensures that every async stage is handled properly and errors are caught in the right place. All init errors now gracefully land in a single place:

        _debug('init / calling host ready');
        is_ready = true;
}).catchError(function(e) {
    throw Error.init('snow / cannot recover from error: $e');
});

They are used judiciously in the core, so that they could seamlessly be replaced with callbacks again if the need arose (such as situations where code might become un-portable because of a complexity debt).

Keeping up with changes

As you can tell, there will be quite a few underlying changes in alpha-2.0, definitely all for the better in my long term view. Only some of these will cause user facing changes, and they won't all happen at once. They will all clearly be marked when they do, and I am happy to help or elaborate if you get caught up with code changes.

Keep an eye on the commit logs in the meantime, and the task lists for alpha-2.0, as these will be listed and knocked down while I wrap things up here.

Automation antics

If you keep a close eye on the snow repo, in the second branch, you may have noticed that I have been re-activating the Continuous Integration server setup for all 20+ snow binaries. This was set up before (before I released publicly), but I took it down because it was proving a bit troublesome to deal with TeamCity.

In the very near future (this week probably) I will write a separate post detailing the processes and services I've tried and the one I landed on. It makes things really easy, and I am poking holes in their Windows beta stuff while I am at it. In the post specific to that, I'll link to the build server output where you can get updated platform binaries for all targets as soon as any commit is made on any branch.

This is cool, because users testing the experimental branches don't need to rebuild snow themselves, nor do I. I can keep the master branch updated with all platforms whenever change happens.

tests/docs can also be automated
The other thing that I will automate short term includes the tests/ folders in both snow and luxe. This allows a user to download a single zip with all 40+ luxe tests for their platform, for the latest versions, and just run them. This is nice for curiosity - if someone interested in the libraries just wanted to run the native samples without installing anything, they could.

It also works with the primary purpose of the tests/ folder, validation. When a user has an invisible geometry bug, they can run the test from the repo code on their machine/config and see if it's localized or a general engine bug.

With all samples/guides/tests being compiled every commit, it will notify us of any single code example falling out of sync with the main code. I was doing this manually before (which is how it occasionally slips by). The updated samples/guides will also be directly embedded in the docs/guides/posts, so any fixes made to the samples code in the repo will be reflected immediately in the public view for testing.

Lastly, this allows me to update the API docs (and generate a downloadable zip link) from the latest code at all times, ensuring that the live api docs are always in sync with the code.

All of this lets me focus on the code and docs more, and will lead to a more consistent experience as we pass over the last alphas up to beta.

Toward alpha-3.0+

Hopefully alpha-2.0 main tasks will be closed fairly soon, as most of the work is done in one form or another. This means that the asset/windowing stuff is mostly finalized, and I need to start considering tasks for alpha-3.0:


alpha-3.0 will definitely give the Entity/Component/Scene systems some attention in some form or other.

Currently this facilitation is some of the "youngest" code in the engine and lacks flexibility in a few places, something I want to resolve the worst of sooner than later. The biggest changes will be internal for the most part, rearranging the code to be more data focused and reducing stopgap implementations to a minimum without changing much of the user code, if any.

I haven't decided on a complement topic just yet, but will soon. Feel free to throw ideas in - I'm interested in what you feel is most pressing for your needs.

after alpha-3.0

Some of the larger changes that have been ongoing since before public release will start to make an appearance somewhere in the next few alphas. Some of these come with significant code cleanup/fixes but nothing too breaking (at least nothing that wouldn't already have been trickled into the repo in preparation :p).

One includes a more consistent implementation of change notifications ("property observing"), which affects classes that notify of changes on their sub properties (like Transform, Vector etc). In other words, when vector.x changes it should notify the Vector itself, which probably has to notify a Transform, which has to notify Entities, and so on. The current implementation was a placeholder and did the job OK while I solved some design issues regarding the concept. All of that I have since resolved, and will begin to migrate into the code soon.

The new code also makes it a lot simpler to do two way data binding - something which mint needs to elegantly handle the changes in the constraints and reflect the changes in the controls, but more on that in a future post.

Color/Vector/Matrix/Quaternion abstracts
Along with new observers, the math classes have been migrating toward being abstracts (compile time types), which offer all the same API, but at runtime don't exist. More specifically, a Vector(x,y,z) at code time, would become an Array[x,y,z] transparently. On a couple of targets this includes native performance benefits (as it uses native arrays), and it is a lot more streamlined because there is far less code overhead or garbage collection related activities generated from the base types.
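For anyone unfamiliar with Haxe abstracts, here is a minimal sketch of the mechanism. The real luxe types carry the full Vector API - this only shows the idea:

```haxe
// a compile-time-only Vector backed by a plain Array<Float>.
// at runtime, new Vector(1, 2, 3) is just the array [1.0, 2.0, 3.0]
abstract Vector(Array<Float>) {

    public var x(get, set):Float;

    public inline function new(x:Float = 0, y:Float = 0, z:Float = 0) {
        this = [x, y, z];
    }

    inline function get_x() return this[0];
    inline function set_x(v:Float) return this[0] = v;

} //no Vector instance ever exists at runtime - only the array
```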

Because the new Vectors/Matrices/Quaternions become abstracts, they lose their ability to do runtime reflection (an OK sacrifice with proper affordances) - something which tweening engines often use to set values on arbitrary targets when they update. To solve this (and switch to strong typing), Andreas and I have been working on a more final version of Delta.

Delta solves those problems while strongly typing all the lower level tweens (colors, vectors, etc) and making a game friendly, more flexible API for tweening that would replace Actuate inside of the luxe.tween package. Actuate would still work as a standalone library of course, but it probably won't be able to tween vectors/colors and other stuff without using the onUpdate callback once the abstracts are in place.

Overall, the new abstracts introduce large scale engine wide benefits, and Delta will be a very familiar API for most. And, of course, Delta will be in place long before the type changes, allowing transitioning time for projects to switch or account for the change themselves.

Another big update heading toward an alpha is the new rendering backend, which finalizes the underlying structures for the future. It removes overhead that shouldn't be there, resolves the bottlenecks present in the current stopgap code, and includes a lot of cool debug tools, APIs, and control if you want to roll your own render path. It won't impede you in any way (this is how luxe is generally designed - but the rendering is some of the oldest code, so it has some of the older concepts left to reduce).

Each of these topics is a whole 5000 words on its own if I went into all the fun details, so I will just mention that they are coming in the near future: I will give ample warning of changes, I will give clear upgrade paths, and the changes will be infrequent and incremental, so things don't explode one day when you check in.

It's exciting, though, as all these things fold the ongoing work into the alpha sheets - finalizing the last few major goals for the alpha period as a whole. After those are complete, beta/stability testing can start properly, and a proper release will be impending.

There are other things to be tackled, which I will detail in subsequent blog posts, as this one is approaching infinite length already.

Part 2

Just kidding, we're done here.

Let me know if you'd like to hear about specific stuff - I have a lot to go on (mint, devise, etc) that I would love to share, but I am interested in elaborating on things that users, new users, or curious onlookers have questions about.

As always, follow the snowkit twitter account for news and updates, join the community chat and all feedback and comments are always welcome.