Below are some projects that I've worked on, in no particular order. Apologies for the long single-page format and large photos, and hence the long load times; I haven't had a chance to make this page performant (yet!).

Co-robotic Exploration and Curiosity-driven Machine Learning

Over the summer, I was a guest student at the WARP Lab at the Woods Hole Oceanographic Institution. I worked on extending the ROST (Real-time Online Spatiotemporal Topic modelling) algorithm to work in mapping contexts. I also developed some basic controllers for the BlueROV and the SlickLizard (a custom surface vehicle developed in the WARP Lab).


Multi-robot Construction

I have been working with Markus Kayser, Sara Falcone, and Nassia Inglessis to create a multi-robot system for construction, specifically for building tubular fiberglass-composite structures. These robots are centrally controlled and are able to climb their own structures. We built 20 robots from scratch, which in turn produced 4.5-meter-tall structures. I designed the electronics, the communications system, the embedded software, and the higher-level user interfaces.

Digital Construction Platform

At the MIT Media Lab, with support from X (formerly Google X), I worked with Steve Keating and Julian Leland to develop a large-scale robot platform for on-site construction tasks. I developed the software and control systems used. You can find our paper in the Research section.

Multi-robot Path Planning

While at Penn for my master's, I studied path planning in single- and multi-robot settings. Here are some algorithms I implemented, either for research or for course projects.

M* for nonholonomic robots with ROS/Gazebo (Wagner et al.):
M* is a search-based planning algorithm for multi-robot settings. Built on A*, it guarantees optimal paths while limiting the number of expansions relative to naive joint-space A* by only expanding coupled states where collisions are detected. I extended it here for non-holonomic settings.
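
Since M* reduces to running A* for each robot independently until a conflict appears, a minimal single-robot grid A* gives a feel for the underlying search. This is a sketch only; the M* coupling logic is omitted:

```python
import heapq

def a_star(blocked, start, goal):
    """Minimal 4-connected grid A* with a Manhattan-distance heuristic.
    blocked: set of impassable (x, y) cells; start/goal: (x, y) tuples."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue                             # already reached more cheaply
        best_g[node] = g
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt not in blocked:
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None
```

M* runs searches like this per robot, and only expands the joint multi-robot state space at the states where robot-robot collisions are detected.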

HRVO using CUDA (Snape et al.):
HRVO (Hybrid Reciprocal Velocity Obstacles) is a method for collision avoidance among moving agents: each robot models the others as velocity obstacles and chooses a new velocity that lies outside their boundaries. I implemented it for GPUs using CUDA. More details at:
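
As a rough sketch of the core geometric test, here is the plain velocity-obstacle cone check (without HRVO's reciprocal/hybrid apex shifting), written for a single pair of circular agents:

```python
import math

def in_velocity_obstacle(p_a, p_b, v_b, r_sum, v_cand):
    """Return True if candidate velocity v_cand for agent A lies inside the
    plain (non-reciprocal) velocity obstacle induced by agent B.
    A collision is implied when the relative velocity points into the cone
    of half-angle asin(r_sum / dist) around the line from A to B."""
    rel = (v_cand[0] - v_b[0], v_cand[1] - v_b[1])   # velocity relative to B
    d = (p_b[0] - p_a[0], p_b[1] - p_a[1])           # offset from A to B
    dist = math.hypot(d[0], d[1])
    if dist <= r_sum:
        return True                                  # already overlapping
    half_angle = math.asin(r_sum / dist)
    cone_dir = math.atan2(d[1], d[0])
    rel_dir = math.atan2(rel[1], rel[0])
    # wrapped absolute angle between relative velocity and cone axis
    ang = abs((rel_dir - cone_dir + math.pi) % (2 * math.pi) - math.pi)
    return ang <= half_angle and math.hypot(rel[0], rel[1]) > 0
```

A planner would evaluate many candidate velocities against every neighbor's cone and pick the closest collision-free one to the preferred velocity.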

Penn FSAE - Red and Blue Racing

As an undergrad, I was on the Penn Formula SAE team (site not up to date), where I worked on the electronics and eventually became the subteam lead for the 2011 season. Our team consisted of Ryan Kumbang, Karan Desai, Nisan Lerea, and myself (plus tons of help from alums, and from Keith and Praveer). Our team historically had one of the most custom electronics systems (self-made CAN network, boards, etc.). I was in charge of making sure our shifting (pneumatics), clutch (linear actuator), ECU, and so on operated smoothly.

In order to save on weight and space, we condensed all the major electronics components besides the motors/pistons/steering wheel into a little box on the back of the car.

Most of our electronics system is based on the same CAN-based technology used in Modlab on the CKbots, which keeps it modular. These boards were designed in Eagle (PCB CAD software) and sent to 4PCB for fabrication. The overall system relies on "brain" boards and "utility" boards. "Brain" boards, as shown below, consist of a PIC microcontroller for logic and a CAN connector for bus communications. Each "brain" board is then paired with a secondary board underneath that provides specific capabilities, such as clutch, steering wheel, shifting, and overall system logic for the car. In total, 8 boards were used on the car, and they were simple to swap out in case of failure or when additional features were needed.
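
The brain/utility split can be pictured as a dispatch table keyed on CAN message IDs. This is only a toy model in Python (the real firmware ran in C on the PICs, and the IDs below are made up for illustration):

```python
# Hypothetical CAN IDs, one per utility function -- not the car's real IDs.
CAN_ID_SHIFT = 0x101
CAN_ID_CLUTCH = 0x102

class BrainBoard:
    """Toy model of a 'brain' board routing CAN frames to handlers that
    stand in for the paired utility board's capability."""
    def __init__(self):
        self.handlers = {}

    def on(self, can_id, fn):
        self.handlers[can_id] = fn      # register a handler for one CAN ID

    def receive(self, can_id, data):
        fn = self.handlers.get(can_id)
        return fn(data) if fn else None  # unknown IDs are ignored

board = BrainBoard()
board.on(CAN_ID_SHIFT, lambda data: f"shift to gear {data[0]}")
print(board.receive(CAN_ID_SHIFT, bytes([3])))  # shift to gear 3
```

Because every board speaks the same bus protocol, swapping a failed board only means registering the same IDs on its replacement.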

Some test footage of our car:

Modlab: CKbots

I worked in Modlab for several years developing software for their CKbots. The CKbots are modular robots that allow us to create arbitrary robot configurations (2 examples below from the Modlab site). Our software was an API designed to enable developers to create applications and behaviours for the CKbots. CKbots could be controlled either individually or in "clusters": for instance, one could specify a cluster of 3 robots to be an "arm" and have the arm follow a predefined motion, rather than controlling individual CKbot motions:

I worked on earlier versions of the CKbots that communicated via CAN-bus, but required manual screw connections. There were some versions that used magnetic faces to connect and disconnect automatically, but I did not interface with those versions.

I implemented a series of high- and low-level software interfaces in Linux for the CKbots, primarily in Python and C (for the embedded portions), working closely with Dr. Shai Revzen and Jimmy Sastra. These interfaces enabled specifying gaits, combining gaits, monitoring the CAN bus, and more. For more general information about the CKbots:

The communication between modules was based on the Robotics Bus Protocol detailed here.
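
As a toy sketch of what a cluster-level gait interface looks like to a developer (all names here are hypothetical, not the real CKbot API), a travelling sine wave across an "arm" of modules might be commanded like this:

```python
import math

class Module:
    """Stand-in for one CKbot module; a real implementation would send a
    position command over the CAN bus instead of storing the angle."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.angle = 0.0

    def set_angle(self, angle):
        self.angle = angle

def sine_gait(modules, t, amplitude=0.6, phase_step=math.pi / 3):
    """Command a travelling sine wave across a cluster: each module lags
    its neighbor by a fixed phase offset."""
    for i, m in enumerate(modules):
        m.set_angle(amplitude * math.sin(t + i * phase_step))

arm = [Module(i) for i in range(3)]   # a 3-module "arm" cluster
sine_gait(arm, t=0.0)
```

The cluster abstraction means the caller thinks in terms of one gait over the whole arm rather than per-module setpoints.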

RoboCup: Upennalizers

In undergrad I was also a member of the UPennalizers RoboCup SPL team. The RoboCup SPL (Standard Platform League) is an international competition where ~24 teams compete in humanoid robot soccer on a standard platform, the Nao.

Here I worked primarily on the vision systems for the Nao robots, making sure they could see the (correct) ball, localize, etc. I implemented features such as multi-line detection using Hough transforms and horizon detection using odometry. I was the vision lead for the 2010 competition in Singapore, where we made it to the quarter-finals (and beat many of the other top teams!). See a sample goal from the competition below.
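
As a sketch of the idea behind Hough-transform line detection (a from-scratch toy, not the Nao vision code): each edge point votes for every (rho, theta) line that passes through it, and peaks in the vote accumulator are the detected lines:

```python
import numpy as np

def hough_lines(points, img_diag, n_theta=180, n_rho=200):
    """Minimal Hough transform: each edge point (x, y) votes into
    (rho, theta) bins, where rho = x*cos(theta) + y*sin(theta).
    Returns the (rho, theta, votes) of the strongest line."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-img_diag, img_diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)   # rho for every theta
        idx = np.digitize(r, rhos) - 1
        acc[idx, np.arange(n_theta)] += 1
    peak = np.unravel_index(acc.argmax(), acc.shape)
    return rhos[peak[0]], thetas[peak[1]], acc[peak]

# Points on the vertical line x = 5 should peak near theta = 0, rho = 5.
pts = [(5, y) for y in range(20)]
rho, theta, votes = hough_lines(pts, img_diag=30)
```

Multi-line detection repeats this by taking several accumulator peaks (with non-maximum suppression between them) instead of just the single strongest one.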

For more information about the team and competition visit our site here.

For more information about our software system read our report.


Mechatronics: Hockey-Playing Robots

For the final project in my mechatronics class, I worked with Michael Posner and Sydney Jackopin to build 3 hockey-playing robots for the class competition, where we made it to the quarter-finals. I primarily worked on the electrical portion, but also helped with parts of the mechanical and software work. Here's a compilation of our matches:


SolidWorks: Toy Transformer Jet

For a final project in our intro to CAD course, I modelled a toy transformer jet in SolidWorks that could actually transform within the model as well. Here's an exploded-view video of the model:

IBM: Mobile Systems Management

I spent ~1.25 years as a software engineer in IBM's Systems and Technology Group, developing mobile applications for server management, specifically for the Flex System offering. Using Dojo and Cordova, we built a JavaScript-based "native" application that ran on iOS, Android, and BlackBerry phones and tablets. The app allowed users to monitor and control (turn on/off/reboot) their servers. The Flex group has since been sold to Lenovo, but here are some screenshots of the product.


Startup: Continuing Education

I spent a year in NYC working at a small startup helping working professionals find continuing education opportunities. I worked on both front-end and back-end components using Ruby on Rails, AWS, ElasticSearch, AngularJS, etc. Below are some screenshots of the tutor-facing version of our product (there were many previous iterations as well).

Search Engine

As a final project for our web systems course, on a team of 4, we built a full-text search engine in Java, based on the original Google paper for querying/retrieval/ranking and on Mercator for distributed crawling, using Pastry as the distribution layer and BerkeleyDB for storage. The whole system was distributed across ~10 AWS servers. The report can be downloaded here.
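
To illustrate the text-relevance half of ranking, here is a basic TF-IDF scorer (a toy in Python, not our Java system, which combined text relevance with the link-based signals described in the Google paper):

```python
import math
from collections import Counter

def tf_idf_scores(query, docs):
    """Score each document against the query terms using TF-IDF:
    term frequency in the document, weighted by how rare the term is
    across the whole collection (inverse document frequency)."""
    n = len(docs)
    terms = query.lower().split()
    df = {t: sum(1 for d in docs if t in d.lower().split()) for t in terms}
    scores = []
    for d in docs:
        tf = Counter(d.lower().split())
        s = sum(tf[t] * math.log(n / df[t]) for t in terms if df.get(t))
        scores.append(s)
    return scores

docs = ["the quick brown fox", "a lazy dog", "the fox and the dog"]
scores = tf_idf_scores("quick fox", docs)
```

In a distributed setting the document-frequency counts are aggregated across crawler shards before query time, which is part of what Pastry-style routing makes convenient.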

SLAM for Humanoid Robot (THOR)

For my learning in robotics course (ESE-650), we implemented a particle-filter-based SLAM (Simultaneous Localization and Mapping) algorithm using data collected from a head-mounted LIDAR together with gyro and odometry readings from a humanoid robot (THOR) used in the DARPA Robotics Challenge. Interesting challenges arise from the inherent "rocking" motion of a bipedal walking robot, which makes map scoring and/or compensation using the gyro/odometry readings even more important.
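
The particle-filter loop alternates a noisy motion update driven by odometry/gyro with weighting and resampling based on how well each particle's LIDAR scan matches the map. A stripped-down sketch of those two steps (the map-scoring step itself is omitted):

```python
import math
import random

def motion_update(particles, d_dist, d_theta, noise=0.05):
    """Propagate each particle (x, y, theta) through a noisy odometry step."""
    out = []
    for x, y, th in particles:
        th2 = th + d_theta + random.gauss(0, noise)   # perturbed heading
        out.append((x + d_dist * math.cos(th2),
                    y + d_dist * math.sin(th2),
                    th2))
    return out

def resample(particles, weights):
    """Resample particles in proportion to their map-match weights.
    (Multinomial resampling for brevity; real systems often prefer
    low-variance resampling to reduce particle depletion.)"""
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(particles, weights=probs, k=len(particles))

particles = [(0.0, 0.0, 0.0)] * 100
particles = motion_update(particles, d_dist=1.0, d_theta=0.0)
```

The "rocking" of a biped mostly corrupts the heading term, which is why weighting particles by scan-to-map agreement (rather than trusting odometry alone) matters so much here.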

Face Detection

For my computer vision course final project, I worked on a face detection algorithm using HoG features and SVMs. The training pipeline computes HoG (histogram of oriented gradients) features over non-overlapping 8x8 cells of 64x64 images: image intensity gradients are binned into orientation histograms, and the concatenated histograms are used as features for an SVM with an RBF kernel. The detector is trained with 811 faces from the LFPW (Labeled Face Parts in the Wild) dataset and 3000+ negative examples.
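
A minimal sketch of the per-cell histogram computation (unsigned gradients, 9 orientation bins; block normalization and the SVM stage are omitted):

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Orientation histogram for one 8x8 cell: gradient magnitudes are
    accumulated into bins by gradient direction (0..180 deg, unsigned)."""
    gx = np.zeros_like(cell, dtype=float)
    gy = np.zeros_like(cell, dtype=float)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]    # central differences in x
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]    # central differences in y
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180  # fold into [0, 180)
    bins = np.minimum((ang / (180 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())  # magnitude-weighted votes
    return hist

# A vertical step edge: all gradients point horizontally (orientation bin 0).
cell = np.zeros((8, 8))
cell[:, 4:] = 1.0
hist = hog_cell_histogram(cell)
```

A 64x64 window yields an 8x8 grid of such cells, and concatenating (and block-normalizing) their histograms produces the feature vector fed to the SVM.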

Image Stitching/Mosaic/Panorama

Also in computer vision, another project was to stitch a panorama. We use the Harris corner detector for features and ANMS (adaptive non-maximal suppression) to filter out weak corners. We then match corners between images using descriptors built from the neighborhoods around each corner, compute homographies from the matches, and use RANSAC to reject outlier correspondences. Once the best homography is found, we apply the homographies to all images other than a selected base image and merge everything on a large canvas.
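
RANSAC works the same way regardless of the model being fit; this sketch fits a 2D line to noisy correspondences (homography RANSAC samples 4 point correspondences per iteration instead of 2 points, but the sample/score/keep-best loop is identical):

```python
import random

def ransac_line(points, n_iters=200, tol=0.1):
    """RANSAC for a 2D line y = m*x + b: repeatedly fit a model to a
    minimal random sample, count inliers within tol, and keep the model
    with the most support. Returns ((m, b), inlier_points)."""
    best = (None, [])
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = random.sample(points, 2)
        if x1 == x2:
            continue                        # vertical sample; skip
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in points if abs(y - (m * x + b)) < tol]
        if len(inliers) > len(best[1]):
            best = ((m, b), inliers)
    return best

# 10 points on y = 2x + 1 plus 2 gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 40), (7, -5)]
model, inliers = ransac_line(pts)
```

In the stitcher, the inlier set from the best homography is then used for a final least-squares refit before warping.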

GPU-based Graphics Pipeline and Path Tracer

During my master's at Penn, I took a GPU programming course and worked on a GPU-based graphics pipeline and path tracer in CUDA. For more information, check out my GitHub pages for descriptions and performance analysis: here for the pipeline and here for the path tracer.

DARPA Learning Applied to Ground Robots (LAGR)

While in high school, I joined the Intelligence in Action Lab at the University of Colorado, Boulder (CU) for 2 summers, supervised by Prof. Gregory Grudic. During the first summer, I worked on optimizing vision code for the DARPA LAGR robot (pictured above on the left). The goal of the DARPA LAGR program was to push robot perception and navigation in large outdoor environments. All teams were provided with the same LAGR vehicles, whose primary means of sensing were 2 fixed stereo cameras, a GPS, an accelerometer, and encoders.

During my second summer in the group, I built a small-scale and cheaper platform for future lab research (pictured above on the right). The base platform was purchased off the shelf; I then added wheel encoders, motor drivers, compass, and GPS units, as well as a microcontroller from BrainStem. I also implemented an API for interfacing higher-level code with the robot, in order to control robot motion and retrieve sensor information.

For more information on the DARPA LAGR program, please see the wiki page.