Who am I?

"Look Mom, it's me!"

Hi! I'm Tony,

Ultimate (Frisbee) player, unintentional insomniac, unmistakable coder.

Topics that I'm interested in include deep learning, full stack development, and virtual reality. Currently my two main projects are studying neural networks with a research professor and helping some graduate students build an educational Google Cardboard app. In addition to this, you can often find me up late at night discovering some new coding subject to be obsessed with; NLP, Arch Linux, front end web, back end mobile, you name it, I've probably already researched it for at least 12 hours.

Don't be fooled though, while it may seem like I'm the classic coder, I'm all about people on the inside. One of my favorite activities is hackathons, simply because of the people surrounding them. The atmosphere of a diverse, bright (and highly caffeinated) community manically attempting to build stuff in 24 hours for fun is honestly just the best thing ever. Beautiful everlasting moments of triumph and hilariously unforgettable experiences of failure, all jam-packed into a single sleepless day, are really something you can't not bond over.

It's really no wonder that combining a night owl with a social butterfly produces such a hackathon-loving species.

Regardless of your spirit animal or circadian rhythm however, all types of people are welcome to follow along on my coding (mis)adventures. Be it friend, recruiter, incredibly-lost-and-confused stranger, or family:

I hope you enjoy,

VR Wizard

19 August 2017

Disclaimer: This article is still under construction. (Spell check may be needed)

It's official!

I, for the life of me, cannot record even a semi-normal tech demo. Unfortunately, the inability to do more than three takes due to rampant technical issues has rendered me incapable of presenting myself professionally. As I'll explain in the video, it was probably a blessing that I could get a GoPro, Vive, OBS, and Unity to all work on a 950M graphics card. And luckily, my day job isn't being a YouTuber.
Below, I've written a brief overview of my coding process for this barebones VR demo. If you manage to make it through the following article, I've decided to reward/punish you with said unprofessional video.

See you then.


First things first, I want to recognize the fact that this is a very underwhelming coding project that is clearly not at all finished, and I apologize for my astonishing procrastination skills. However, within 48 hours I'll be shipped off to my college dormitory (where I won't have access to a Vive), so I realized this would be my last opportunity to do a demo/write-up. My laziness aside, here's how this article is organized:

1) Overview of three main "powers" you see in the video and how I implemented them in Unity.
2) Developer notes on other features, optimizations, or algorithms I made that weren't mentioned.
3) Challenges faced when working on the project.
4) Future plans for the "game" and how I plan to go about doing them.

Feel free to skip any and all of what's to come.

The "Force Push"

As a general rule of thumb, I suck at naming things and this one isn't an exception. So, the "force push" isn't really so much a push as it is a constantly-updating-force-vector-that-matches-its-respective-controller. Surprisingly, the majority of the project actually came from this single idea that a child pretending to be Luke Skywalker could now fulfill their dream in glorious VR. While this concept isn't new, a part of me decided it would be a cool idea to make a game revolving around the powers we grew up envying and pretending to have.

As the name suggests, this ability allows the user to "use the force" on a group of objects and motion them to wherever their heart desires. More specifically, the user points with the controller to a game object and/or point on the floor and all objects in the vicinity are now affected by the force.

How It's Made© pseudocode version:

- Create a "raycast" (a line) that extends out from the controller, parallel to the handle.
- At the hitpoint of the raycast, do an array collection of all movable game objects within a radius.
- Then, while the effect still remains true, iterate through the array and update the objects accordingly (color, forces, etc.)
- Once the button is released, reset all game objects to default attributes

This series of steps means that:

- No objects can enter the spell once cast
- No objects can leave the spell unless uncast
- Multiple spells may be acting upon one object, and the forces will arithmetically add onto each other
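The steps above can be sketched as a Unity controller script. This is a hypothetical reconstruction, not the project's actual code; names like `forceRadius` and `pushStrength` are my own.

```csharp
using System.Collections.Generic;
using UnityEngine;

public class ForcePush : MonoBehaviour
{
    public float forceRadius = 3f;   // capture radius at the raycast hit point
    public float pushStrength = 50f; // force applied per physics step
    private List<Rigidbody> captured = new List<Rigidbody>();

    // On button press: raycast out from the controller and collect every
    // movable object near the hit point. Nothing can join the spell later.
    public void BeginPush()
    {
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit))
        {
            foreach (Collider c in Physics.OverlapSphere(hit.point, forceRadius))
            {
                Rigidbody rb = c.attachedRigidbody;
                if (rb != null && !rb.isKinematic)
                    captured.Add(rb);
            }
        }
    }

    // Every FixedUpdate while the button is held: forces from multiple spells
    // naturally accumulate, since AddForce sums per physics step.
    public void UpdatePush()
    {
        foreach (Rigidbody rb in captured)
            rb.AddForce(transform.forward * pushStrength);
    }

    // On button release: drop everything back to default behavior.
    public void EndPush()
    {
        captured.Clear();
    }
}
```

Because each spell keeps its own captured list and `AddForce` accumulates, the "multiple spells on one object" case in the last bullet falls out for free.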

The last bullet point sounds like it wouldn't work, given the pseudocode I've just written, but through testing I realized there was a simple fix.

One of the more difficult parts of coding this force was determining exactly what sort of interaction I wanted to have with the power. Was it going to mimic the controller on a 1:1 scale and just have a delay? Was the player going to just be pushing the object from a distance with greater force? Would the force just register one straight vector command and then turn off? If you really think about the mechanics, "the force" from Star Wars is a lot more complicated and technical than you'd think.
In the end, I just decided to go with {Article not finished}

The Force Shield

The "Force Explode"

"It's like fus roh dah if you had mouths all over your body" - 4:30am me.

Unfortunately this was showcased pretty poorly in the video, but you get the idea that we wanted to go Super Saiyan and have everything explode outwards from us in fear. In the overall broad concept of a dueling game, I decided that having multiple solutions for both defense and offense would create a lot of unique combat moments and experiences. Thus, I created a rather flashy move that rolled defense and offense into one, keeping opponents on their toes and things exciting.

Developer Notes

Powers not shown in the video:

"I cast magic missile as a 6th level spell and it's f***ing awesome" - Dungeons and Dragons me.

As I'll explain later, I do want some sort of projectile to shoot at people, because that's what makes a dueling game fun. A while back I was actually pretty interested in things like game theory, min-max algorithms, and zero-sum situations. But more specifically, I pondered what made games like League of Legends and Dota 2 so mechanically fun and addicting. In fact, I was so obsessed I had mentally devised some crazy plan for how I would write the world's best AI that could analyze every variable in such a complex game and ultimately achieve a 100% winrate on the ranked ladder.

...yeah that panned out well.

But anecdote aside, I actually learned that a lot of what makes such a game fun to play and watch is the ability to respond innovatively and quickly to any move your opponent makes. You might be thinking, "well, speed chess isn't really all that different and you don't hear teenagers screaming over that." And that's where the innovation comes in. In chess you have a rather limited scope of the various permutations of uses for pieces. You take other pieces and you control spaces. In a situation such as League, however, with only 4 main abilities to use, you find professional players will often stretch their usage to the last possible mile, doing everything it takes to win a fight. For example, a simple roll/dodge ability (Vayne Q) may be used to dodge multiple abilities, reposition to be just out of range of the enemy, apply extra damage, roll just into reach to last hit an enemy, or simply arrive somewhere faster. With this sort of possible innovation, the audience has a lot less to learn about the game, and more to be wowed by when a player pulls off a mechanically difficult maneuver.

Honestly I could go on for hours about the intricacies of MOBA games but for the sake of sanity let's just leave it at that. In conclusion, just like League of Legends, the highly-acclaimed, triple-A title "VRWizard" must give players options to react with and innovate with the awesome abilities gifted to them in virtual reality. While we're here let me just say for the record: I'm very aware that this game is not anywhere close to even having the right to be called "a game" and this is all just an excuse for me to talk about theory that I've been researching for weeks.

Universal Force Push

"Like a universal remote but.... yeah." - 5:00am me.

So I went back into the code and, for some reason, this no longer seems to exist in the demo, but originally I bound double grip to a universal force push. This meant that literally every movable game object would be at the command of the player, regardless of how far away. This was especially cool to show off when a player would let some blocks drop off into the horizon far away and slowly bring them back up, floating into view. Nothing much else to say here, the implementation was fairly simple.
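A plausible minimal version of this, assuming the same Unity setup as the regular push: instead of an `OverlapSphere` at a raycast hit, just grab every non-kinematic rigidbody in the scene. The identifiers are my guesses, not the original code.

```csharp
using UnityEngine;

public class UniversalForcePush : MonoBehaviour
{
    public float pushStrength = 50f;

    // On double grip: affect literally every movable object, no distance limit.
    public void PushEverything()
    {
        foreach (Rigidbody rb in Object.FindObjectsOfType<Rigidbody>())
        {
            if (!rb.isKinematic)
                rb.AddForce(transform.forward * pushStrength);
        }
    }
}
```

`FindObjectsOfType` is slow, so you'd only want to call it on the button press (and cache the result) rather than every frame, especially on a 950M.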

Overall Notes

One of the things I mentioned in both the intro of this article and the end of the video is my computer's performance. As most gaming enthusiasts know, an Nvidia GeForce GTX 950M is severely underequipped to run most VR applications. Amazingly, one of the perks of having an extremely barebones VR demo is that it runs extremely smoothly in all circumstances. So I want to emphasize that even though this demo looks mediocre due to my lack of work, I honestly am not sure my computer could've run the demo + OBS if I hadn't coded everything from scratch.

Another thing I want to note is that while a lot of functionality may seem buggy in the video, I go on later to explain the various different combinations of "powers" that cause interesting interactions. So please keep that in mind when considering the cleanliness of the code.


One of the more important lessons I've learned so far is how to properly write an overhead manager script in Unity. As you'll see in the video, a lot of forces from both controllers can be intermingled and come out clean. While some of this functionality can be placed on the robustness of Unity, I'm going to spike down my stake now and say I definitely put some genuine work into this one. First things first, most Unity projects you'll see online start out with controller listeners on each controller. So the right controller will have a designated right-controller-script, and same with the left. This is a completely fine way to go about doing it at the beginning, but when it comes to listening for conjoined button presses (like the force explode), things get pretty gnarly quick.

The first thought that popped into my head when thinking of a solution to listening for simultaneous button presses was that I'd just have both controllers talk to each other. Easy peasy lemon squeezy. Turns out that's actually "Difficult impossible lemon not-easy."

The first thing that added some confusion was the fact that when the command actually worked, for some reason the ability would always be twice as powerful as I had set it (50 units of force would somehow exert 100 units on an object). This turned out to be because I had written the if statement if (bothControllerGripPressed) on both controller scripts, which made total sense at the time (listener receives message, checks if self object is also true, acts accordingly) but unfortunately just ended up making both controllers execute the same single function.

Another thing you slowly notice when trying to implement this is that Vive controllers can flexibly switch between the labels "left" and "right." Turns out programmers are smart, and whoever coded this one had the controllers automatically recognize whether they were on the left or right side of a forward-facing headset. This became a problem because sending a message to "the other script" was difficult if you didn't know beforehand which script it was (left or right). Because I had hard coded in script assignments like a dummy, the listener would occasionally work and occasionally not, depending on how I picked up the controllers to test.

Trust me, that whole situation was a fun one to debug and eventually understand.

At this point you might say, "well then why don't you just search for a non-self-referencing gameObject with the name "Controller"?" To which I'd respond, "950M. There must be a better way." And thus, the "Behavior Monitor" script was born. This script regularly received updates from both controller scripts, which were now re-coded to be identical (or should I say ambidextrous?), and created events accordingly. Voilà.
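A minimal sketch of that "Behavior Monitor" idea, with hypothetical names of my own: both (now identical) controller scripts report their grip state to one central script, and the combined event fires exactly once, which sidesteps both the doubled-force bug and the left/right relabeling problem.

```csharp
using UnityEngine;

public class BehaviorMonitor : MonoBehaviour
{
    private bool leftGrip, rightGrip;

    // Each controller script calls this with its own side. Neither controller
    // needs a reference to the other, so left/right reassignment is harmless.
    public void ReportGrip(bool isLeft, bool pressed)
    {
        if (isLeft) leftGrip = pressed;
        else rightGrip = pressed;
    }

    void Update()
    {
        // One script, one check, one event: the force explode can't fire twice.
        if (leftGrip && rightGrip)
            TriggerForceExplode();
    }

    void TriggerForceExplode()
    {
        Debug.Log("Force explode!"); // placeholder for the actual ability
    }
}
```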

Future Plans

One of the first things I'd definitely like to implement is fast-moving projectiles that I can shoot at other entities and objects. This honestly isn't really that much work (aside from the well-known "bullet through paper" problem); however, as stated before, I'm lazy, I've run out of buttons to put abilities on, and there's no other player to shoot yet.
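For what it's worth, Unity's usual answer to the "bullet through paper" problem is continuous collision detection, so a fast projectile sweeps its path between physics steps instead of teleporting past thin colliders. A hedged sketch (the class and `speed` value are mine):

```csharp
using UnityEngine;

public class Projectile : MonoBehaviour
{
    public float speed = 100f;

    void Start()
    {
        Rigidbody rb = GetComponent<Rigidbody>();
        // ContinuousDynamic checks the swept path against both static and
        // moving colliders, at some extra physics cost per projectile.
        rb.collisionDetectionMode = CollisionDetectionMode.ContinuousDynamic;
        rb.velocity = transform.forward * speed;
    }
}
```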

Overall, this would lead to more immersive and high-pressure situations where the player actually has to remember a certain visual pattern rather than just press some buttons. Also I definitely think it'd be good practice to attempt shape recognition in 3D space. Who knows, maybe I'll try throwing in a simple neural network there and see where it takes me. And finally, while I obviously still have a lot more texturing and gameplay improvements to do, I still want to implement the main selling point of this game idea: multiplayer.

Diving Into ArchLinux

20 August 2017

Article coming soon.

The following video is a great example of why you shouldn't attempt to record a demo after staying up multiple nights to blast through i3wm.

Succulent: An Arch Linux Rice

21 March 2018

Article is in the works!

Coding In React Native!

21 March 2018

Article coming soon... I promise.