It’s Game Time

It’s no secret to those who know us: my wife and I love games. We own a large collection of board games and frequently host game nights for our friends. We like all genres, from deep strategy to light party games and everything in between. In most games we are pretty evenly matched; I am sure a large enough sample of our gaming history would put us pretty close to a 50:50 split. There is, however, one category of game that skews wildly in her favor. Word games…

My wife loves word games, and she is very good at them. Me, on the other hand, not so much. Scrabble, Boggle, Wordle, Bananagrams: I am outclassed 90% of the time. So when I decided to make something personal to give my wife for her birthday (and maybe tackle a new development project at the same time), I figured making her a word game would be a great idea!

But what kind of word game? I wanted something quick and simple that could be played any time she had a spare minute, but with enough depth to keep her engaged on the couch on a Sunday afternoon. I decided on a simple anagram-style game: given a series of letters, find as many English words as possible that can be made from them. Add a one-minute timer for a little pressure and a score multiplier to incentivize careful guesses, and “Brianneagram” was born.

From a technical perspective, the game runs as a simple Angular web application. It leverages a local dictionary file with more than 370,000 words to validate guesses. Each puzzle is made by randomly selecting a 7-letter and an 8-letter word from the dictionary and shuffling their letters together, so there are always guaranteed to be, at an absolute minimum, two valid guesses per puzzle. I use a looping 1-second interval to create a countdown timer, and I add player guesses to a list to check for repeats and increment a score multiplier.
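
To give a rough idea of how those pieces fit together, here is a minimal TypeScript sketch of the core loop. The names (buildPuzzle, submitGuess, the dictionary set) are hypothetical stand-ins for illustration, not the actual game code:

// Minimal sketch of the puzzle and guess logic; the real app loads the
// dictionary from the local word file instead of hard-coding a Set.
const dictionary = new Set<string>(['example', 'words' /* ...370,000 more */]);

interface Puzzle {
	letters: string[];     // shuffled pool of 15 letters the player draws from
	guessed: Set<string>;  // accepted guesses, used to reject repeats
}

// Pick one 7-letter and one 8-letter word and shuffle their letters together.
function buildPuzzle(sevenLetterWords: string[], eightLetterWords: string[]): Puzzle {
	const pick = (list: string[]) => list[Math.floor(Math.random() * list.length)];
	const letters = (pick(sevenLetterWords) + pick(eightLetterWords)).split('');
	// Fisher-Yates shuffle
	for (let i = letters.length - 1; i > 0; i--) {
		const j = Math.floor(Math.random() * (i + 1));
		[letters[i], letters[j]] = [letters[j], letters[i]];
	}
	return { letters, guessed: new Set<string>() };
}

// A guess only counts if it is a dictionary word, hasn't been used yet,
// and can be spelled from the remaining letter pool.
function submitGuess(puzzle: Puzzle, guess: string): boolean {
	const word = guess.toLowerCase();
	if (!dictionary.has(word) || puzzle.guessed.has(word)) {
		return false;
	}
	const pool = [...puzzle.letters];
	for (const letter of word) {
		const index = pool.indexOf(letter);
		if (index === -1) {
			return false; // letter not available in the pool
		}
		pool.splice(index, 1);
	}
	puzzle.guessed.add(word);
	return true;
}

// The one-minute round: a looping 1-second interval counting down.
let secondsLeft = 60;
const timer = setInterval(() => {
	secondsLeft--;
	if (secondsLeft <= 0) {
		clearInterval(timer); // time's up, end the round
	}
}, 1000);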

Once I had the base framework of my Angular app laid out, adding the bindings followed quickly. This project is by no means pushing the limits of Angular, but it was certainly fun to play around and learn the basics. TypeScript is a language I am only somewhat familiar with, so I was learning on multiple levels at once.
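
For a flavor of what those bindings look like, here is a hypothetical, stripped-down component (made-up names, not the real game code): interpolation for the timer and score, an event binding for submitting a guess, and *ngFor for the guess list. It assumes the component is declared in a module that provides Angular's common directives:

import { Component } from '@angular/core';

// Hypothetical sketch of the kind of bindings used; not the actual game component.
@Component({
	selector: 'app-game',
	template: `
		<h2>Time left: {{ secondsLeft }}s</h2>
		<h3>Score: {{ score }} (x{{ multiplier }})</h3>
		<input #guessBox placeholder="Your guess" />
		<button (click)="submitGuess(guessBox.value); guessBox.value = ''">Guess</button>
		<ul>
			<li *ngFor="let word of guessedWords">{{ word }}</li>
		</ul>
	`,
})
export class GameComponent {
	secondsLeft = 60;
	score = 0;
	multiplier = 1;
	guessedWords: string[] = [];

	submitGuess(guess: string): void {
		// The real app validates against the dictionary; this just rejects blanks and repeats.
		if (guess && !this.guessedWords.includes(guess)) {
			this.guessedWords.push(guess);
			this.score += guess.length * this.multiplier;
			this.multiplier++;
		}
	}
}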

Arguably my favorite part of the project was putting together an Azure DevOps pipeline for automated deployments of the web app. I wanted the game to be playable on a mobile device, and rather than set up an emulator, deploying the code and testing it on my own iPhone and Android tablet was my go-to strategy. Having an automated pipeline for these deploys saved me a huge amount of time when making changes and testing bug fixes.

In addition to the pipeline, I made some Bicep templates for the Azure resources that back the web app (a simple storage account with static web hosting and a CDN endpoint for caching and SSL termination). Why automate only half a project, right?

All in all, this was a small project, but definitely a fun one. I learned a lot in a number of areas and tried my hand at a handful of new languages, tools and frameworks. I hope my wife has half as much fun playing the game as I did making it.

Lights, Camera, YouTube!

I recently decided that I wanted to create a YouTube channel and start making a handful of development tutorial videos and vlogs. This clearly required some level of recording/editing setup. Being both a hobbyist on a budget and new to making videos, I had to do a hefty amount of searching to find tools that were both easy to learn and inexpensive or free. After reading a number of reviews, installing a handful of applications and playing around for a while, I feel like I have found the best of both worlds. Today I am going to share my thoughts on what I found for both recording and editing.

Recording – Flashback Recorder Express: 

www.flashbackrecorder.com

The Express version of Flashback Recorder from Blueberry Software takes the blue ribbon for ease of use. Within 30 seconds of installing the software, I had configured my recording setup exactly how I wanted it: webcam enabled and positioned, desktop icons hidden, screen region for recording selected, audio tested and ready to roll. As I made a handful of test recordings, I branched out into some of the more advanced settings. Everything was right at my fingertips and exactly where I would expect to find it. It also offers some nifty features like scheduled recordings or starting a recording when an application launches. The launch screen also has direct links to a number of tutorial videos. While the tutorials are related to editing (which is a Pro or higher feature), I am a huge fan of tutorials to expedite the learning process; nothing beats a “hello world” to learn something new. I was also very pleased that Flashback Recorder doesn’t impose limits on recording length or apply a watermark. Very helpful for making my hobby efforts look and feel more professional.

The “free-ness” of the application only started to show itself when it came time to edit and export my recordings. The Express version offers minimal editing options and only exports to WMV. Since I had expected to pair it with an editing program, this was not a deal breaker. The price of the Pro version is a very reasonable $49, and after experiencing the ease of use of Express, I’m definitely going to take the 30-day trial of Flashback Recorder Pro for a spin, and I expect I will likely make the purchase.

I am a far cry from a professional YouTuber and I’m sure I am only scratching the surface of features that are staples for the pros, but Flashback Recorder is a great, easy-to-learn tool that I would highly recommend to anyone looking to make some recordings.

Editing – Shotcut:

www.shotcut.org

After trying a number of editing tools, a trend began to develop: complexity. One after another, the tools proved to have a tremendous learning curve. I was very pleasantly surprised when I found this mold breaker. Shotcut is an open source editing tool that is a welcome change for new learners. The UI begins sparse, with only the most obvious tools exposed (export, import, etc.), but collections of other features are bundled into easy-to-access windows that can be toggled on or off from the control ribbon. The control windows can also be pulled out and reorganized to customize the layout to your individual needs.

Immediately after downloading the application, I was prompted to check out the tutorials to get started. These were all well paced and clearly touched upon all the features I would need as a first timer. I quickly found myself working through all of them in a desire to know more and learn what options were at my disposal. The export feature in Shotcut comes with over 50 presets for compiling and outputting your creation (including one for YouTube). Shotcut could easily be used as a file type conversion tool on its own.

Royalty Free Music – Incompetech:

incompetech.com

My video would be boring without some audio to go with it. I had heard the name Kevin MacLeod before from some of the YouTubers I follow as the go-to guy for royalty free music. I headed over to his site and I was not disappointed: more than 1,000 tracks covering every genre under the sun. You can search by genre, length, mood or BPM, or just browse the list by name. All the tracks can be played in your web browser to sample them, and each one comes with a copyable attribution snippet. I picked out the one I wanted and imported it into Shotcut with a simple drag and drop.

The End Result:

After about an hour of putting together a collection of clips, applying filters and transitions, I had a video I was ready to export. The export process was fast and simple and I now had my Runtime Development welcome video for my YouTube Channel.

So while my built-in microphone and webcam aren’t going to win me any awards for cinematography, I feel like I was able to come up with a pretty reasonable product for a first go. This will help pave the way for more tutorials and vlogs as time rolls on. Maybe it will even inspire someone else to take the plunge and try something new.

Real world spaces. Blurring the lines between HoloLens devices and emulators.

By far the most amazing feature behind the HoloLens and mixed reality as a whole is the ability to build upon your real world. I personally feel this is where mixed reality blows virtual reality out of the water. We can take real spaces, where we work, live, anywhere we spend our days and nights, and build them out, expand upon them and make incredible things happen.

Those of us who have dropped the coin for a HoloLens device of our own have the incredible opportunity to work firsthand in our own environments while developing. This allows us to experience our own apps with a level of detail and precision that no emulator could ever replicate. Now don’t get me wrong, the HoloLens emulator does a pretty spectacular job of letting someone work and interact with holograms and the spatial mapping the HoloLens offers. Deploying to the emulator is also significantly faster than deploying to a real device over Wi-Fi, and I personally vet most of my changes on the emulator before taking the time to deploy to the device for an extended run.

Because I tend to work back and forth between the emulator and the real device so frequently, I found it exceptionally useful to take my real-life mapped workspace from the device and import it into my emulator. This allows me to interact with a very familiar space on the emulator as well as on the device. Especially without a visual representation of the spatial mapping turned on in the emulator (more on this in a future post), a foreign room can make it difficult to get one’s bearings or to get the desired results out of one’s actions.

Being able to export a room from the device onto the emulator has a number of great use cases. For example, being the only person in my circle of development friends with a physical device, I can map their home work spaces on my device (already done, since they all wanted me to bring it over so they could check it out for themselves) and provide each of their emulators with a representation of their own work space. Or maybe you work for a company that’s fortunate enough to be developing HoloLens applications. A HoloLens for each developer likely isn’t in the budget (kudos to you if it is). By exporting and importing the space, any number of development machines can each run their own emulator with a common space. This makes sure everyone has a similar experience, and it frees up some of the demand for real device time.

Because I think this is so useful, I thought I would share how to set it up. Here is a quick breakdown of the steps we need to make this happen:

1. Configure the device to use the device portal.
2. Connect to the device portal and export your room.
3. Import the room into our emulator.

Configuring the device to use the device portal is all done through the settings menu on the HoloLens device. Follow these simple steps to configure it:
1. Find and open the settings window from the main menu.
2. Select “Update” from the settings window.
3. Select “For developers” from the left hand menu.
4. Enable “developer mode”, then scroll down and enable “device portal”.

Now that the device is configured, we need to connect to the device portal on a PC. We will need the IP address of our device. We can get this from the settings menu on the HoloLens under Settings > Network & Internet > Wi-Fi > Advanced Options. Punch this IP address into your web browser to head to the device portal. The first time you access the portal, you will be prompted to create a username and password:
1. In the browser, click “request pin”. This will display a pin on the HoloLens.
2. Enter the pin into the field in your browser along with the username and password you want to use.
3. Click “pair” to connect to the device portal. You will likely get a security error, but you can ignore it.

The device portal has a ton of cool features that we will go over in more detail in future posts. For now, select “3D View” from the left hand menu. Scroll down and click “update” under spatial mapping. This will load a 3D image of the room, as your HoloLens knows it, into the window above. If the room isn’t complete enough for what you want, walk around and look at the sections of the room to flesh them out, then click “update” again to refresh the view.

Once you are happy with the state of the scanned room, click “Simulation” in the left hand menu. In the room name field under capture room, enter a name for your room like “Basement” or “Office”. Click the “capture” button to download a .xef file.

Now that we have our exported room, it’s time to upload it into our emulator. Start your emulator by running any HoloLens project from Visual Studio with the HoloLens Emulator as your deploy target.

Once the emulator starts, click on the Tools button on the sidebar menu (the >> button). Select the “room” tab from the window that opens. Click “load room”, select the .xef file we downloaded from the device portal and click open.

Your emulator is now using a copy of the room from the physical device. You can pilot around in the emulator to check it out, or click the device portal button right above the tools button in the emulator menu. This opens the device portal for the emulator, and by selecting the 3D View menu option and clicking update under spatial mapping, you can display the room in the same way we did for the physical HoloLens to get a better feel for the space.

That’s all there is to it! You can repeat this process any number of times with any number of rooms and swap them out whenever you want from the tools menu in the emulator. I hope you found this tutorial helpful and can make use of your own real spaces in your emulator.

 

Text Occlusion in Unity

Over the past few days, I have been tinkering with some simple HoloLens projects. I hope to go into more detail on these in a later post, but today I thought I would share some information regarding 3D text and occlusion in Unity.

For any of my illustrious readers who might not know, occlusion is when an object is blocked (occluded) from the view of the observer by another object (which occludes it). Think of a person walking on the far side of the street. If a bus were to pass between you and the other person, the person would be occluded from your view. The same principle applies in 3D modeling.

The issue I discovered is with 3D text in Unity. By default, the shader that 3D text uses is the same one used for GUI text (a shader is a script that contains the mathematical calculations and algorithms for determining the colour of each rendered pixel). In a shooter video game, the heads-up display with your health and ammunition should always display in front of you, in front of any other objects (GUI text). That is a much less desirable effect when displaying the time on a 3D model of an alarm clock: the time shouldn’t be visible through walls or out the back of the clock. The text there should be occluded.

This sample uses a very basic 3D cube, and positioned slightly behind it is an empty game object with a TextMesh component applied to it. As you can see, the text shows right through the cube. No occlusion out of the box here.

So if the default shader doesn’t get me what I need, I need to create my own. Luckily, I’m certainly not the first person to encounter this issue and be seeking an answer. Unity provided the shader script for me along with some basic instructions. I thought I would document my process here and help flesh out some of the areas which I felt were a bit unclear in the instructions.

1. Create/import the assets we need
The three assets we need are a shader, a material and a font. We can create a new empty shader and material by right-clicking the Project section of our Unity window, selecting Create and choosing the shader and material from the context menu.

As for the font, a quick Google search for “Arial truetype font” turned up a .ttf file on GitHub (thanks to JotJunior for the file). I simply downloaded it and dragged it from my downloads folder into the Project window. Download it from here if you want to follow along.

https://github.com/JotJunior/PHP-Boleto-ZF2/raw/master/public/assets/fonts/arial.ttf

A quick rename of my assets to help identify them, and I’m ready to roll.

2. Configure the shader
Double-clicking our OccludedTextShader, or right-clicking it and selecting “open”, will allow us to edit the content of the shader so that it correctly makes use of occlusion. Replace the entire contents of OccludedTextShader with the following:

Shader "GUI/3D Text Shader" { 
	Properties {
		_MainTex ("Font Texture", 2D) = "white" {}
		_Color ("Text Color", Color) = (1,1,1,1)
	}
 
	SubShader {
		Tags { "Queue"="Transparent" "IgnoreProjector"="True" "RenderType"="Transparent" }
		Lighting Off Cull Off ZWrite Off Fog { Mode Off }
		Blend SrcAlpha OneMinusSrcAlpha
		Pass {
			Color [_Color]
			SetTexture [_MainTex] {
				combine primary, texture * primary
			}
		}
	}
}

3. Configure the material
Our OccludedTextMaterial needs to be configured to use the same shader type as the OccludedTextShader we just created. Select OccludedTextMaterial from the Project window and, in the Inspector, change the Shader dropdown value to GUI/3D Text Shader. Now we need to set the font texture for OccludedTextMaterial: expand the arial font file in the Project window and drag its font texture into the Font Texture slot of OccludedTextMaterial. While we are here, we will set the desired font color. I chose red to have it stand out boldly against the default white of the object that will occlude the text.

4. Configure the game object with our text to use the new assets
By selecting the HelloWorld game object (which has our TextMesh on it) in the Hierarchy window, we can then drag our arial font from the Project window onto the Font field.

5. Apply our new OccludedTextMaterial to the Mesh Renderer
Now that we are using our imported font, we need to update the MeshRenderer of HelloWorld to use our new OccludedTextMaterial. Drag the OccludedTextMaterial from the Project window onto the Material field of the MeshRenderer.

And with that, we have achieved occlusion! As you can see, the cube now occludes the text behind it. If we rotate the camera to the back side of the scene, we can see the text is then in front of the cube. Our text interacts visually the way we would expect it to in the real world.

Thanks for following along. I hope you found this article useful or learned something new about shaders or occlusion. Check back soon for more how-to articles or updates on what I’m doing.

Episode IV – A New Blog

Well, here it is. My first kick at the can at building and running a blog. Still working out the kinks, figuring out what I like “out of the box” and what has got to go. Why a blog, you might ask? Who’s going to see it? Why even bother? At 5:00 am Saturday morning, coffee in hand, I asked myself much the same thing. The only answer I could come up with was, “why not?” I had an itch to try something new, so here it is.

By 5:15 am, with my second cup of coffee in hand and a Notepad++ document full of potential domain names, it was time to get things under way. I use Amazon Web Services at work, so the experience there made it feel like a safe choice for hosting my own project. Luckily, the free tier is pretty flexible for hobbyists and weekend warriors like me.

WordPress immediately jumped to mind as a blog tool, based on popularity. After a couple of hours of going through the WordPress documentation and building the required components in AWS, I had a VM up and running, an RDS instance and MySQL database operational, PHP configured, WordPress installed and running, and my domain name mappings sorted out. *Phew* Time to reward my efforts with some GUI work.

The rest of the weekend quickly vanished into fine tuning, cleaning up the myriad of sample and test files I had created, reading more documentation on various AWS resources and generally ignoring my wife and dog far more than I should have. With Sunday quickly coming to a close, it’s time to put away the laptop, make some lunches for Monday and get ready for the work week. Such is the way of a hobby/personal endeavor. All the more motivation for “Episode V – The Blog Strikes Back”.

In future posts, I hope to include more photos documenting my progress through things like this, and to go into more detail on some of the challenges I faced and how I overcame them. Maybe down the road, someone else at 5:00 am with their own coffee in hand, asking themselves “why not?” can benefit from what ends up here.