Projects

Below are some projects that I've worked on, in no particular order. Apologies for the long single-page format and large photos, and hence the super long load times. I haven't had a chance to make this page performant (yet!).

Semi-supervised Tracking of Marine Animals with Autonomous Underwater Vehicles

As part of my PhD, I am working with Dr. Yogesh Girdhar on computer vision and machine learning methods that enable AUVs to monitor marine habitats and track arbitrary marine animals, without tags, to study their behavior. To find out more, please visit the research section of my website or our lab website. This work was also featured on Nvidia's blog. For a short video of one of our platforms and our field work, see below:

Co-robotic Exploration and Curiosity-driven Machine Learning

Over the summer, I was a guest student at the WARP Lab at the Woods Hole Oceanographic Institution. I worked on extending the ROST (Real-time Online Spatiotemporal Topic modelling) algorithm to work in mapping contexts. I also developed some basic controllers for the BlueROV and SlickLizard (a custom surface vehicle developed in the WARP Lab).

Fiberbots

I have been working with Markus Kayser, Sara Falcone, and Nassia Inglessis to create a multi-robot system for construction, specifically for making tubular fiberglass-composite structures. The robots are centrally controlled and are able to climb the structures they build. We built 20 robots from scratch, which in turn produced 4.5-meter-tall structures. I designed the electronics, the communications system, the embedded software, and the higher-level user interfaces.

Digital Construction Platform

At the MIT Media Lab, with support from X (formerly Google X), I worked with Steve Keating and Julian Leland to develop a large-scale robotic platform for on-site construction tasks. I developed the software and control systems used. You can find our paper in the Research section.




Multi-robot Path Planning

While at Penn for my master's, I studied path planning in single- and multi-robot settings. Below are some algorithms I implemented, either for research or for course projects.

M* for non-holonomic robots with ROS/Gazebo (Wagner et al.):
M* is a search-based multi-robot planning algorithm built on A*. It guarantees optimal paths while reducing the number of expansions relative to naive joint A* by only expanding the full joint state space where collisions are detected. I extended it here to non-holonomic settings.
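To make the core idea concrete, here is a minimal Python sketch of M*'s limited neighbor generation on a toy 1-D world; the helper names (policy_step, moves) are mine for illustration, not the paper's API:

```python
from itertools import product

def limited_neighbors(config, collision_set, policy_step, moves):
    """Sketch of M*-style limited neighbor generation: robots not
    implicated in any collision just follow their individually optimal
    policies; only robots in the collision set branch over all of their
    moves, which is what keeps the joint search space small."""
    options = []
    for i, v in enumerate(config):
        if i in collision_set:
            options.append(moves(v))              # full branching for this robot
        else:
            options.append([policy_step(i, v)])   # follow single-robot plan
    return [tuple(c) for c in product(*options)]

# Toy 1-D world: robots move -1/0/+1 along a line toward their goals.
goals = [3, 0]
moves = lambda v: [v - 1, v, v + 1]
policy_step = lambda i, v: v + (1 if goals[i] > v else (-1 if goals[i] < v else 0))

print(limited_neighbors((0, 2), set(), policy_step, moves))   # no conflicts: 1 successor
print(limited_neighbors((0, 2), {0, 1}, policy_step, moves))  # both branch: 9 successors
```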


HRVO using CUDA (Snape et al.):
HRVO (Hybrid Reciprocal Velocity Obstacles) is a method for collision avoidance among moving agents: each neighbor induces a region of velocities that would eventually lead to a collision, and each robot picks a new velocity that lies outside those regions. I implemented it for GPUs using CUDA. More details at: https://github.com/arizonat/CIS565-FinalProject
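My implementation was in CUDA, but as a rough Python sketch of the underlying primitive, here is a plain velocity-obstacle membership test; HRVO shifts the apex of this cone to get its hybrid/reciprocal behavior, but the membership test is the core building block (all names here are illustrative):

```python
import numpy as np

def in_velocity_obstacle(v_rel, p_rel, r_sum):
    """Is the relative velocity v_rel inside the (plain) velocity
    obstacle induced by a neighbor at relative position p_rel with
    combined radius r_sum? True means the current velocities lead to a
    collision at some future time."""
    v2 = np.dot(v_rel, v_rel)
    if v2 == 0.0:
        return np.linalg.norm(p_rel) < r_sum  # not moving: collide only if overlapping
    t = np.dot(p_rel, v_rel) / v2             # time of closest approach
    if t < 0.0:
        return False                           # moving apart
    closest = p_rel - t * v_rel                # separation at closest approach
    return np.linalg.norm(closest) < r_sum

# Robot heading straight at a neighbor 5 m away, combined radius 1 m:
print(in_velocity_obstacle(np.array([1.0, 0.0]), np.array([5.0, 0.0]), 1.0))  # True
print(in_velocity_obstacle(np.array([0.0, 1.0]), np.array([5.0, 0.0]), 1.0))  # False
```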



Penn FSAE - Red and Blue Racing



As an undergrad, I was on the Penn Formula SAE team (site not up to date), where I worked on the electronics and eventually became the subteam lead for the 2011 season. Our team consisted of Ryan Kumbang, Karan Desai, Nisan Lerea, and myself (plus tons of help from alums, and from Keith and Praveer). Our team historically had one of the most custom electronics systems in the competition (self-made CAN bus, boards, etc.). I was in charge of making sure our shifting (pneumatics), clutch (linear actuator), ECU, and other systems operated smoothly.

In order to save on weight and space, we condensed all the major electronics components besides the motors/pistons/steering wheel into a little box on the back of the car.

Most of our electronics system was based on the same CAN-based technology used on the CKbots in Modlab, which made it modular. The boards were designed in Eagle (PCB CAD software) and sent to 4PCB for fabrication. The overall system relied on "brain" boards and "utility" boards. "Brain" boards, as shown below, consist of a PIC microcontroller for logic and a CAN connector for bus communications. Each "brain" board is then paired with a secondary board underneath that provides it with a specific capability, such as clutch, steering wheel, shifting, or overall system logic for the car. In total, 8 boards were used on the car, and they were simple to swap out in case of failure or when additional features were needed.
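As a rough illustration of the brain/utility split (written in Python with python-can for readability, rather than the PIC C we actually ran, and with made-up message IDs):

```python
import can  # python-can; the real boards ran C on PICs, this is just a sketch

# Hypothetical message IDs; the real bus layout isn't documented here.
CLUTCH_CMD_ID = 0x101
SHIFT_CMD_ID = 0x102

def brain_board_loop(bus):
    """One generic loop listens on the shared CAN bus and dispatches
    commands to whatever capability the paired utility board provides."""
    handlers = {
        CLUTCH_CMD_ID: lambda data: print("engage clutch to", data[0]),
        SHIFT_CMD_ID: lambda data: print("shift to gear", data[0]),
    }
    while True:
        msg = bus.recv(timeout=1.0)
        if msg is not None and msg.arbitration_id in handlers:
            handlers[msg.arbitration_id](msg.data)

if __name__ == "__main__":
    bus = can.interface.Bus(channel="can0", interface="socketcan")
    brain_board_loop(bus)
```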

Some test footage of our car:


Modlab: CKbots



I worked in Modlab for several years developing software for their CKbots. The CKbots are modular robots that allow us to create arbitrary robot configurations (2 examples below, from the Modlab site). Our software was an API designed to enable developers to create applications and behaviours for the CKbots. CKbots could either be controlled individually or in "clusters"; for instance, one could specify a cluster of 3 robots to be an "arm" and have the arm follow a predefined motion rather than controlling individual CKbot motions:

I worked on earlier versions of the CKbots that communicated via CAN-bus, but required manual screw connections. There were some versions that used magnetic faces to connect and disconnect automatically, but I did not interface with those versions.

I implemented a series of high- and low-level software interfaces in Linux for the CKbots, primarily in Python and C (for the embedded portions), working closely with Dr. Shai Revzen and Jimmy Sastra. The API enabled specifying gaits, combining gaits, monitoring the CAN bus, and more. For more general information about the CKbots, see: modlabupenn.org/ckbot.
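For flavor, here is a hypothetical Python sketch of what the cluster-style interface felt like; the class and method names are illustrative, not the actual Modlab API:

```python
import math

class Cluster:
    """Hypothetical sketch of a cluster-style API in the spirit of the
    CKbot software; names here are illustrative, not the real interface."""
    def __init__(self, module_ids, send_position):
        self.module_ids = module_ids        # bus node IDs in this cluster
        self.send_position = send_position  # callback: (module_id, angle) -> None

    def step_gait(self, t, amplitude=0.6, phase_offset=2.0):
        # Sinusoidal gait with a fixed phase lag between modules, a
        # common pattern for modular-robot locomotion.
        for k, mid in enumerate(self.module_ids):
            angle = amplitude * math.sin(t + k * phase_offset)
            self.send_position(mid, angle)

# Treat three modules as one "arm" and drive a predefined motion:
arm = Cluster([0x10, 0x11, 0x12],
              lambda mid, a: print(f"module {mid:#x} -> {a:+.2f} rad"))
for step in range(3):
    arm.step_gait(step * 0.1)
```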

The communication between modules was based on the Robotics Bus Protocol detailed here.


RoboCup: Upennalizers



In undergrad I was also a member of the UPennalizers RoboCup SPL team. The RoboCup Standard Platform League (SPL) is an international competition in which roughly 24 teams compete in humanoid robot soccer using a common platform, the Nao.

Here I worked primarily on the vision systems for the Nao robots, making sure they could see the (correct) ball, localize, and so on. I implemented features such as multi-line detection using Hough transforms and horizon detection using odometry. I was the vision lead for the 2010 competition in Singapore, where we made it to the quarter-finals (and beat many of the other top teams!). See a sample goal from the competition below.
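As a rough sketch of the multi-line detection step using OpenCV's probabilistic Hough transform (the thresholds here are illustrative guesses, and our actual Nao implementation differed):

```python
import cv2
import numpy as np

def detect_field_lines(image_bgr):
    """Detect line segments (e.g., field markings) via edge detection
    followed by a probabilistic Hough transform."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Each returned segment is (x1, y1, x2, y2) in pixel coordinates.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=60, minLineLength=40, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]

# frame = cv2.imread("field.png")
# for x1, y1, x2, y2 in detect_field_lines(frame):
#     cv2.line(frame, (x1, y1), (x2, y2), (0, 0, 255), 2)
```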

For more information about the team and competition visit our site here.

For more information about our software system read our report.


Robockey



For my final project in my mechatronics class, I worked with Michael Posner and Sydney Jackopin to build 3 hockey-playing robots. For more competition details, check out www.robockey.com. We made it to the quarter-finals. I was responsible for the electrical portion, and also helped with parts of the mechanical and software work. Here's a compilation of our matches:


IBM: Mobile Systems Management

I spent ~1.25 years as a software engineer in IBM's Systems and Technology Group, developing mobile applications for server management, specifically for the Flex Systems offering. Using Dojo and Cordova, we built a JavaScript-based "native" application that ran on iOS, Android, and BlackBerry phones and tablets. The app allowed users to monitor and control (turn on/off/reboot) their servers. The Flex group has since been sold to Lenovo, but here are some screenshots of the product.


MasterStreet

I spent a year in NYC working at a small startup helping working professionals find continuing-education opportunities. I worked on both front-end and back-end components using Ruby on Rails, AWS, ElasticSearch, AngularJS, etc. Below are some screenshots of the tutor-facing version of our product (there were many earlier iterations as well).




DARPA Learning Applied to Ground Robots (LAGR)

While in high school, I joined the Intelligence in Action Lab at the University of Colorado, Boulder (CU) for 2 summers, supervised by Prof. Gregory Grudic. During the first summer, I worked on optimizing and speeding up vision code for the DARPA LAGR robot (pictured above on the left). The goal of the DARPA LAGR program was to push robot perception and navigation in large outdoor environments. All teams were provided with the same LAGR vehicles, whose primary means of sensing were 2 fixed stereo cameras, GPS, an accelerometer, and wheel encoders.

During my second summer in the group, I built a smaller-scale and cheaper platform for future lab research (pictured above on the right). The base platform was purchased from superdroidrobots.com. I then added wheel encoders, motor drivers, a compass, and a GPS unit, as well as a BrainStem microcontroller. I also implemented an API for interfacing higher-level code with the robot in order to command motion and retrieve sensor information.
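As a sketch of what that API looked like in spirit (the serial protocol, port, and method names here are made up for illustration):

```python
import serial  # pyserial; the actual microcontroller link is an assumption here

class SmallBot:
    """Hypothetical thin robot API like the one described above:
    velocity commands down, sensor readings back."""
    def __init__(self, port="/dev/ttyUSB0", baud=57600):
        self.link = serial.Serial(port, baud, timeout=0.5)

    def set_velocity(self, left, right):
        # Send wheel velocities; units and format are made up for the sketch.
        self.link.write(f"V {left:.2f} {right:.2f}\n".encode())

    def read_sensors(self):
        # Expect one line like "E 1023 1020 H 187.5" (encoders + heading).
        self.link.write(b"S\n")
        return self.link.readline().decode().strip()

# bot = SmallBot()
# bot.set_velocity(0.3, 0.3)
# print(bot.read_sensors())
```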

For more information on the DARPA LAGR program, please see the wiki page.