All Posts By Umar Mukhtiar

Interactive Visualization with Kinect

By Umar Mukhtiar | Uncategorized

Recently I started working on a quick demo to create an interactive visualization with Kinect. The idea was to build a music visualization that could be interacted with through Kinect; once someone interacts with it, a trigger causes a message to appear.

Starting out, I experimented with building the demo in openFrameworks using CPU particles. I tried particles morphing into multiple shapes/images and different visual combinations.

After a couple of days, I realized that this path would require considerable time and effort to produce any decent results, and I was planning to finish the demo within two weeks. After looking at several different solutions, I went with the one that would produce the biggest bang for the buck, i.e. Unity3D.

I started experimenting with 2D GPU particles using different examples and did rapid iterations, but after a few experiments I realized that this, too, required more time than I was willing to invest right now.

This hunt for a nice GPU particle solution led me to TC Particles. This package is incredible; I have pushed it up to 2 million particles while maintaining 60 fps. Now that I had the particles resolved, I messed around with my audio device settings and read the input from my mic so that I could use the song currently being played as input.

For the visualization I simply mapped the FFT to particle properties. Properties were mapped with a simple formula, Property = Factor * fn(FFT[sample]). I used the factor to scale the FFT value to produce a better visual result and used some kind of function (e.g. cos, log, exponent) to produce a different mapping for each particle behaviour.
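As a rough illustration of that mapping (this is not the demo's actual code; the band indices, scale factors, and property names below are made-up examples), a small sketch in C++ could look like this:

#include <cmath>
#include <vector>

struct ParticleProperties {
    float size;          // rendered particle size
    float emissionRate;  // particles spawned per second
    float hueShift;      // colour offset applied to the particles
};

// fft holds per-bin magnitudes for the current audio frame
// (indices assume a reasonably sized FFT).
// Each property follows Property = Factor * fn(FFT[sample]).
ParticleProperties MapFftToParticles(const std::vector<float>& fft) {
    ParticleProperties p;
    p.size         = 2.0f   * std::log(1.0f + fft[4]);     // low band drives size
    p.emissionRate = 500.0f * fft[16];                      // mids drive emission
    p.hueShift     = 0.5f   * std::cos(fft[64] * 3.1415f);  // highs drive colour
    return p;
}

Scale factors like these usually end up being tuned by eye against whatever track is playing.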

After that I just integrated a Unity Kinect sample to get skeleton tracking and added box colliders to the skeleton to allow interaction. I don't really like this method of interaction; it would be far better if the particle system were in 2D and we could collide directly with the depth-map data, but I'll leave that for some other day.

All in all it turned out to be a nice proof of concept.

Dev Diary – Oculus Rift F1 Demo

By Umar Mukhtiar | Developer Diary, Game Development, Oculus, Technical, Unity

From the time we received the Oculus Rift development kit in July 2013, we were very keen on working on something related to the Oculus, since there weren't any good games or demos to showcase its true potential. Although the Oculus is still in development and will take time to mature, we felt the need to create something for the new realm of gaming it opens up. We already had a Leap Motion and the Xbox racing controller on hand, so we decided to make a racing game, owing to our prior experience with racing games.

Oculus

Eventually we decided that a Formula 1 racing game would do wonders if we gave the gamers a real-time experience of driving a real Formula 1 car.

I did some research on Formula 1 circuits and found the Monaco street circuit quite fitting to our requirements, as it provides a rich architectural view close to the sea as well as some nice-looking buildings along the track.

But as I started to gather more information on the circuit, it became obvious that the Monte Carlo circuit was not the right choice. The circuit has many elevation shifts, tight corners, and a narrow track, which make it perhaps the most demanding track in Formula 1 racing. Due to its tight and twisty nature, it favors the skill of the drivers over the power of the cars. Although we had Formula 1 fanatics in mind when we started off, putting a lot of effort into the physics of the car was impossible given that we only had three weeks to finish the demo.

Motor Racing - Formula One World Championship - Monaco Grand Prix - Saturday - Monte Carlo, Monaco

On further research into racing tracks, I came across the Valencia Street Circuit, which has the same geographical characteristics as Monaco while providing ample beauty and rich architecture. Another fact that helped lock in Valencia was that the last grand prix at the Valencia Street Circuit was held in July 2012, and the circuit has not hosted the European Grand Prix since. So in a way we are paying tribute to the Valencia Street Circuit by making it our choice of track in the Oculus Rift F1 Demo.

Valencia

In the research phase on the Valencia circuit, Google Maps and Bing Maps were really helpful in categorizing and identifying the monumental buildings and structures that stood out. We kept in mind that not all buildings needed to be modeled in detail, because we primarily wanted to show them from the driver's point of view; only buildings with unique architecture were to be modeled in detail.

Bing

Then came the documentation phase. In projects like these, documentation is key to completing the work in a timely and organized manner. It is also important to note that although only two of us were working on the assets, it's best to lay the documentation out so that the project can scale; we could just as easily have been working in a team of, say, 15 people and stayed organized. We used Trello for documentation, listing buildings by name and assigning a level of detail to each one.

We marked around 40 buildings along the 5.419 km track and referenced each of them through Google Street View in order to have ample data to model them.

For texturing, we picked out similar tileable textures from our database; we maintain a library of textures made on previous projects, which always comes in handy.

After completing the modeling and texturing work for the buildings, we brought them into Unity individually to set up their LODs and get them ready for the demo.
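Unity handles this through its LOD components in the editor rather than in code, but the idea behind an LOD setup is simply to swap in a less detailed mesh as the object covers less of the screen. A minimal sketch of that selection logic, with hypothetical thresholds:

#include <cstddef>
#include <vector>

struct LodLevel {
    float minScreenHeight; // fraction of screen height at which this level is still used
    int   meshIndex;       // which mesh variant to render
};

// Levels are ordered from most to least detailed.
// Returns -1 when the object is small enough on screen to cull entirely.
int SelectLod(const std::vector<LodLevel>& levels, float screenHeight) {
    for (std::size_t i = 0; i < levels.size(); ++i) {
        if (screenHeight >= levels[i].minScreenHeight)
            return levels[i].meshIndex;
    }
    return -1;
}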

A Demo for Oculus

By Umar Mukhtiar | Game Development, Oculus, Technical, Unity

These days we have taken on a very cool project: creating a demo for the Oculus Rift. I won't share the details just yet, as they deserve a few photos, but I solemnly swear we are up to no good.

The core requirement for the project is that it should look good, like really good. With this goal in mind I ventured into researching existing solutions and the latest tech that I had missed reading up on, since for the past few months I had been very busy with Death Mile. Of course, as Death Mile was created in Unity 3D and I had been working with and loving Unity for the past year, I started my experimentation in it.

Being a graphics programmer at heart, I went crazy experimenting with DirectX 11 tech that I had been reading about for a long time but never got to work with. The endless possibilities of the now nearly complete exposure of DirectX 11 features in Unity made my mind go wild, but as I ventured deeper I realized I needed to back up a little and set a realistic goal that I could finish in the three weeks planned for the demo.

During this experimentation I kept working on a test scene with the amazing Marmoset Skyshop for Unity as the core lighting solution, but I wasn't testing the scene with the Oculus itself, both because it was only meant for trying out techniques and because I was waiting for my new GPU to arrive.

Yesterday I finally tested the scene, and I'll have to say it changed my view of the project quite a bit. I had played around with the Oculus before, so I knew the resolution is quite low, but I still expected it to show some detail. It turns out that on the Oculus a scene with moderate realism (IBL for the ambient term, simple baked ambient occlusion plus SSAO, soft but stable shadows to reduce aliasing, and a few tricks here and there, but nothing mind-blowing) can produce a very good-looking image, since the blurring removes most of the fine detail anyway. The new problem set I am focusing on at the moment is improving image quality on the Oculus (there is an interesting paper on that) and finding alternatives for the aliasing problems in Unity, which need fixing too.

Unity way of thinking

By Umar Mukhtiar | Uncategorized

It's been around a year since we moved to Unity 3D from the Marmalade engine. There were a lot of reasons for the move, which I will discuss in a future post. One of the key reasons for the switch was that we had hired a new artist, and I felt we were wasting talent if the artists couldn't control the scene themselves. I'll have to say I am glad we made the switch.

Unity's biggest strength is, of course, its incredible scene editor. I had always praised Unity for creating an interface that is so easy to grasp, and was even more impressed when I saw the team become productive with Unity in no time at all. All I had to do was install Unity on their systems, and before the day ended the team had a level ready.

Being an indie, we need rapid iteration to produce polished titles. I can't stress enough how rapid and how numerous the iterations need to be. This is where Unity excels. The entire flow in Unity lets you just play with the thing all day long; when the team is bored, they randomly cook up terrains and go on a long drive.

Still, for every idea we come up with there is always a feature/cost consideration, after which the idea goes to execution. But here lies the problem. In the old days, it would take a decent amount of time to even get a basic prototype of an idea into the game. Feathers would be ruffled, classes would be modified, new additions would be made to old systems. So any mid-sized change required a fair amount of time.

This is where Unity is a game changer. Unity allows you to make changes so fast that you don't need to sit and mull over a new concept in your head for hours; you can just prototype it. These rapid iterations allow you to do far more than contemporary engines ever could, especially with a small team. The catch is that it requires you to change your mindset.

The point being: after working for more than 8 years with old-school engines, the tech has evolved, and Unity has brought about a new landscape of rapid iteration. The feature/cost estimate I do in my head always had an arbitrary imbalance: when the feature seemed big, I automatically assumed the cost had to be big too. It has taken me some time to get used to, but now that I have really changed my mindset I feel much more agile. We think up crazy new ideas and implement them in a couple of hours to actually play what we had in our minds.

Using Dropbox as GIT Backup/Repo

By Umar Mukhtiar | Developer Diary, Technical

We wanted the ability to check out our recent code from different locations, as we are sometimes not in the same geographical location. We simply needed a versioning system with a repository on the internet. This would have meant using a service like Bitbucket or GitHub, or setting up our own server. None of that is technically a challenge, but we had been using Dropbox for sharing documents and thought of using it for Git as well. That way we could use Dropbox for everything from code to documents to web backups.

We also wanted to keep a backup on a NAS; this way we get backups in two different locations.
We realized that the best way to use Dropbox is to create a bundle of the code + assets and put it in the Dropbox folder.

We created a batch script to automate the whole process: adding new files (respecting .gitignore, of course), committing them, creating a bundle to drop into the Dropbox folder, and refreshing the clone on the local NAS drive. The script prompts for a commit message before committing the code.
Here is the batch file, where the P: drive is our NAS drive mounted on Windows.


echo "Commiting Nerdiacs Project Repo"
set INPUT=
set /P INPUT=Type input: %=%
cd e:GamesRepo
e:
call git add . -A
call git commit -m "%INPUT%"
call git bundle create E:DropboxTeamSharedCodeRepoGamesRepo.bdl master
cd p:GamesRepo
p:
call git pull origin
pause

I hope this helps others in automating bundle creation and backup on Dropbox.
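If anyone is wondering how to get the code back out of the bundle on another machine: a bundle file can be cloned directly (for example, git clone GamesRepo.bdl GamesRepo), and later on you can pull new commits from a fresher copy of the bundle with git pull <path-to-bundle> master.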

Artificial Intelligence in BLAZ3D

By Umar Mukhtiar | Developer Diary, Game Development, Technical

As we have already released BLAZ3D on iOS and the PlayBook, I would like to share a bit about the development process of the game, specifically the artificial intelligence (AI) part.

BLAZ3D was initially planned to run on devices such as the Nokia N95 and N82, and later on the iPhone 2G. Considering the specs of those phones, we were quite restricted by the hardware. We needed to find a sturdy solution that could give us AI that acted smart, yet wouldn't eat a lot of processing. In fact, considering the load that Bullet physics was already putting on the device, we had allocated near-zero budget for AI.

The AI was supposed to be able to pick up power-ups, use shortcut ramps, and avoid obstacles. With these goals in mind we started designing the AI, but we quickly realized that we also needed to keep it constantly competitive: the levels varied a lot in difficulty, so the AI would quickly become too easy to defeat or, at times, too hard.

The solution we came up with for near-zero-CPU AI was to use simple splines. Rather than using only the traditional best-path spline, we distributed splines: throughout each level we manually placed small regions of splines, connecting the options available to the AI.

For example, a straight spline through a straight piece of tunnel would keep going until it reached power-ups, after which three splines would branch out of the main spline, pass through the power-ups, and finally merge back into one. We had some complex scenarios as well, where we extended splines to give the AI the option of turning sharply through the next corner to catch the near power-up, or turning loosely to catch the power-up near the far wall of the tunnel.

Each spline had an identifier in its name, along with a list of children, to tell the AI which splines it could choose next. This still meant the AI was pretty dumb, so to give it a bit more intelligence we simply added one more variable to each spline: the probability of using that spline. Then we assigned each AI its own probability on load. So if there were three AI racers, there would be one that was always tough to defeat (much like the red enemy in Pac-Man; sad to see that the WordPress spellcheck tries to correct "Pac-Man").

So with this single probability variable on each spline, we managed to bring each character to life with this equation:

finalProbability = nextLinesProbability + enemyProbability + a random value between 0 and 0.25

We looped through all the splines in the child list with this equation, and as soon as one of the finalProbability values went over 1, we went ahead with that spline. This simple system gave each AI a dynamic feel with just a few additional operations.
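As a rough sketch of that selection step (this is not the shipped code; the structure names and the fallback when no child crosses the threshold are my own assumptions), it boils down to something like this:

#include <cstdlib>
#include <vector>

struct Spline {
    float probability;          // authored weight for choosing this spline
    std::vector<int> children;  // indices of splines that can follow this one
};

// Returns a random value in [0, 0.25).
static float RandomBonus() {
    return 0.25f * (std::rand() / (float)RAND_MAX);
}

// enemyProbability is the per-racer weight assigned on load; the first child
// whose combined score goes over 1 is chosen. Falling back to the last child
// when nothing qualifies is an assumption, the post does not cover that case.
int ChooseNextSpline(const std::vector<Spline>& splines,
                     const Spline& current, float enemyProbability) {
    for (int child : current.children) {
        float finalProbability =
            splines[child].probability + enemyProbability + RandomBonus();
        if (finalProbability > 1.0f)
            return child;
    }
    return current.children.empty() ? -1 : current.children.back();
}

A racer loaded with a higher enemyProbability crosses the threshold far more often, which is how a single number per racer produces the always-tough opponent described above.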

 

Analyzing cross-platform engines for mobile phones

By Umar Mukhtiar | Android, Developer Diary, Game Development, iPhone

Quest for Mobile 3D Engine

When we started development on our first title, the first step, as usual, was to work out our needs and select an engine accordingly. After considerable debate, we knew what we really needed was a cross-platform engine for mobile phones with solid 3D APIs and support for as many platforms as possible. This hunt, coupled with our naivety since it was our first title, led us to the EdgeLib engine. Our experience with it was scarring to say the least, but definitely a huge learning experience, which we will leave for a future post. It sent us hunting for a new, stronger solution that we could call our base, our motherland, and that is exactly what we found with the Airplay SDK, now called Marmalade.

Marmalade is what we call a truly cross-platform engine. Supporting more than six different operating systems while truly abstracting away their APIs is a very big deal, and providing extensive support and a huge array of helper tools is unheard of for an engine aimed at indie developers. Instead of going through all of its features, I will just point out a few that really caught my eye:

Memory Management

Marmalade provides a very nice solution for managing memory. Instead of giving you fast alloc calls and calling it a day, Marmalade has a system of buckets. You allocate your data inside buckets and specify the size of each bucket in a configuration file. This way you know exactly how much memory you are consuming in each area. Such a system also makes it easier for Marmalade to manage memory, so memory leaks are detected immediately. Add an internal system for tracing call stacks and you get a setup that points to the exact code block causing the leak.
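This is not Marmalade's actual API, just a minimal sketch of the bucket idea as I understand it: every allocation is charged against a named, fixed-size bucket, so per-area usage (and anything left unfreed) is trivial to report.

#include <cstddef>
#include <cstdio>
#include <cstdlib>

struct Bucket {
    const char* name;     // area of the game this bucket covers
    size_t      capacity; // budget set in the configuration
    size_t      used;     // bytes currently allocated from it
};

void* BucketAlloc(Bucket& b, size_t bytes) {
    if (b.used + bytes > b.capacity) {
        std::printf("bucket '%s' over budget (%zu bytes requested)\n", b.name, bytes);
        return nullptr;
    }
    b.used += bytes;
    return std::malloc(bytes);
}

void BucketFree(Bucket& b, void* p, size_t bytes) {
    b.used -= bytes;
    std::free(p);
}

// At shutdown, any bucket with used > 0 points straight at the leaking area.
void ReportBuckets(const Bucket* buckets, size_t count) {
    for (size_t i = 0; i < count; ++i)
        std::printf("%-12s %zu / %zu bytes in use\n",
                    buckets[i].name, buckets[i].used, buckets[i].capacity);
}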

Scalable 3D pipeline

Airplay not only abstracts basic OS API calls across all platforms but, due to the widely varying hardware of mobile phones, also has a scalable 3D pipeline. Supporting both GLES 2.0 and GLES 1.0/1.1 is a huge feature in itself; couple that with a low-level fallback software renderer and you get a game that will run on any phone on the market, whether it has a GPU or not.

The Simulator

Airplay has a solid simulator for running the game on a PC/Mac. This simulator goes way beyond what any phone vendor has provided yet, with features like accelerometer simulation, switching graphics drivers, compass simulation, and some very detailed metrics for your game. It allows you to configure the simulator in real time to get as close as possible to the real device. Beyond that, the metrics the simulator provides are a very handy optimization tool, showing every API call made and the number of times it was made. It also reports the exact memory usage of each bucket, which is again useful for optimizing your game for low memory use.

The Extra Tools

Airplay provides some very neat external tools, which are integrated with the library very well. Airplay has a system of resource groups and build styles, which lets you sort your game's data into nice little resource files; with these you can easily create multiple editions of your game for different phones and handle all the assets easily. The best part is that we just need to specify the compression format for each texture in the resource file, and Airplay automatically compresses them on the first build and loads them in-game from the compressed file. This removes a lot of headaches like manually converting each texture to different compression formats for different GPUs. Airplay also provides the derbh archive module for game data compression, which, coupled with texture compression, can reduce your game's data by 60-70%!

 

The feature set never ends for this beast of an engine. Of course it has its failings at times too, but they are always very minor compared to the huge list of features it gives us.

 

We'll have to say Marmalade is one of the best mobile 3D engines we have used for mobile game development.