14 Sep 2021
I originally wrote a rough draft of this post on my flight from Philadelphia to California on 17 June 2021, but never got around to publishing it until recently.
Waiting in anticipation of my flight from Philadelphia to California!
It’s taken me a lot longer to write this post than I wanted it to. It’s harder for me to reflect on my experience as a ROBO MS student at UPenn than it typically would be, as more than half of it took place in a time of untold uncertainty. The pandemic uprooted campus life entirely, and it’s not possible for me to view the master’s experience apart from it.
Compared to the three online semesters I’ve had, I would unequivocally say that my first semester was the best one. I never took a single class for granted. There were several moments in my first semester where I used to pinch myself when walking to class on Locust Walk, thinking to myself that I had finally arrived at a utopia where I could study just what I wanted to and had all the resources to do anything I could imagine. It was the realization of a dream I wrote about back in my first year of undergrad. But midway through the second semester, the image of the utopia I had in mind was shattered once it became clear that COVID would become a major factor in all the decisions I would have to make to salvage whatever I could from the degree - from the courses I could take to the labs I could choose to work with. There was a lot of variation in my workload during the online semesters, for I worked part-time for a lab during the summer of 2020, worked full-time for a professor’s startup while studying part-time in my third semester, and studied full-time in my final semester.
But now, on the other side of the MS degree, I am in a position I would have been thrilled to be in two years ago. I am very grateful that I was able to graduate with a GPA I can be proud of and with an exciting career in robotics on NVIDIA’s autonomous vehicles team. The road to finishing the master’s degree was long and difficult, and at some points in 2020, it felt like there wasn’t a road to go on at all. Despite that, things have fallen into place for me remarkably well. And now, writing this blog post on my first business trip, I feel like I have made it, though not in a way I could have expected.
I read an article last year in the early days of COVID which said that the final stage of the grieving process is looking for meaning in times of hardship and tragedy. In a time when COVID has taken something from everyone, the only thing one can gain is some justification for all the struggles of a very different year. Personally, it has shown me what really matters and what things are worth worrying about. For as long as I can remember, I have been obsessively worried about things going wrong despite every indication that they weren’t. It wasn’t always a bad thing, because the image of failure in my head pushed me to work harder. But when COVID became an emergency and the health of people I cared about became a primary concern, it made the problems I faced during my first semester seem a lot less significant. I still shudder when I think back to the long periods of isolation in the summer of 2020. It was a rude awakening that I should have tried building a better social life before the pandemic, because there was no way to establish one with the pandemic raging. I was very fortunate to have a very supportive circle of friends that I could reach out to whenever I needed a pep talk during that time.
I have started seeing changes in myself as well. On the positive side, I am much more ready to go out and interact with new people than I ever have been. In some ways, that’s exactly the mindset I want going into a new city. On the other hand, the solitude I enjoyed so much in 2019 now makes me uncomfortable, reminding me of the long stretches of sitting alone in 2020. A side effect of this is that I don’t want to tinker or work on side projects as much. That was also the case during the two years of the master’s, since my coursework more than satisfied my appetite for cool things to work on.
Deja vu and excitement on seeing the Bay Area after years
I feel like there’s tremendous opportunity even before starting my new job at NVIDIA. I was exhilarated to get the offer for this job nearly a year before the joining date. It completely turned my terrible summer around, as the security of having a solid job offer in hand cannot be overstated when everything else seemed so uncertain last year. Though I have had some memorable internship projects in the past, I have never worked at a ‘real’ company before, so I am excited to see how an engineering team works at such a large organization. I feel like the last two years have been a build-up for this job and I’m really looking forward to actually applying what I learned during my degree to a business application. Most of my robotics coursework was either completely theoretical or only implemented in simulation. My brief experience of deploying robotics algorithms on a physical robot showed me that it is almost always a frustrating experience. Moving out of simulation land is a necessary rite of passage for any robotics engineer, and if all goes well, I should have plenty of opportunities to do that in this role.
I am also looking forward to moving out of Philadelphia. I’ve made many great memories during my two years in the city, but living in West Philadelphia was never a comfortable experience. Besides the general sense of urban decay to the north of Market and west of 40th St, I absolutely loathed the five months of freezing weather and early sunsets forcing everyone inside. From whatever I’ve heard about the Bay Area, I feel like life there will suit me much more than any other place I’ve lived before. I expect there’ll be a learning curve of getting used to this new phase of life, but I don’t expect it to be as steep as the ones I’ve had in the past when moving to a new place.
11 Sep 2021
The summer of 2021 was glorious, and one of my first trips was a very memorable visit to Cherry Springs State Park, located in the middle of rural Pennsylvania. Situated on top of a mountain and hundreds of miles away from any major city, Cherry Springs is a world-famous stargazing spot frequented by hobbyist and professional astronomers, with camping grounds booked for months on end. On a clear night one might expect to see hundreds (if not thousands) of stars, comets, Messier objects, and even a glimpse of the Milky Way galaxy. Unfortunately, clear nights are hard to come by at Cherry Springs, with only 80-90 in a year. Coupled with the fact that the best time to go is during a new moon, it can be tricky to plan a trip to the park more than a few nights in advance. I highly recommend checking the sky conditions for stargazing at ClearDarkSky - ideally it should be as dark as possible with minimal cloud cover.
Cherry Springs Park has multiple astronomy sites, but unless you book in advance, you will most likely end up going to the Night Sky Public Viewing area near the campgrounds. The Public Viewing area is a massive field where one can set up telescopes and cameras.
Our trip to Cherry Springs Park started from Philadelphia and was only really set in motion at the last minute, after finding a rental car and a party of seven people who didn’t mind driving 500 miles to spend the night outside. Our drive to CSP took us through several towns and villages once we got off the highway. The last leg of the trip is particularly challenging, with winding mountain roads and several deer prancing across. As expected of a mountain road near a stargazing site, there are no streetlights and the surroundings are pitch dark. We had to cautiously make our way up the mountain with the car’s high-beams on the whole time. Thankfully, the drive was uneventful and we reached the park at about 11:30 pm, giving us several hours before dawn.
On stepping out of the car, I felt the frigid mountain air on my face. While it was a balmy 25C in Philadelphia, the temperature on top of the mountain hovered around 0C. I was silently hoping that the sleeping bag I had brought would be up to the task of protecting me from the elements for the night. I took a quick glimpse of the night sky, and even though my eyes were still adjusting to the dark, I could see more stars than I had ever seen before in my life. We wrapped red wrapping paper (all that I could find) around our flashlights to make our way in the darkness without disturbing our night vision too much. After a ten-minute walk, we found a suitable spot to set up our own astronomy camp in the middle of the Public Viewing area. With several bedsheets, two sleeping bags, some snacks, and a camera mounted on a tripod, we unpacked all our supplies for the night. I didn’t waste any time jumping right into the sleeping bag to stave off the biting cold.
Once my eyes adjusted to the darkness, I got my first ever look at the Milky Way galaxy - a sight I had been waiting for nearly ten years. I had seen many jaw-dropping photos of the Milky Way, and unfortunately, this is one of the times reality comes up short. The galaxy looks like a distant cloud behind hundreds of stars and I could only just make out the dim glow I had read so much about. That said, the photos do not do justice to the scale of the Milky Way, and it is an entirely different experience to see it stretch across the entire night sky. The sky was perfectly clear and there was no moonlight competing for our night vision either, so I wonder if it is even possible to see the Milky Way any more clearly than we did that day.
I tried using the Night Sky app on my phone to put a name to the stars and constellations, but I gave up after a few minutes of trying to connect the dots. Eventually, the cold lulled me to sleep.
I woke up to find the cold completely unbearable. After a few more minutes of taking in the wonders of the universe, we made our way back to the comforts of the car at around 5 am to warm up. As a fitting end, I saw my second shooting star of the night on the walk back. I switched on the engine and let the heater warm the interior for an hour before we felt up to making the journey back home. We took the same mountain road as the previous night and were greeted by the sights of lush forests and rapidly flowing rivers in the early morning. I was thrilled to have checked off my teenage dream of seeing the Milky Way with my own eyes. It blows my mind that the same views we drove so far to find are always above us, but invisible because of the light pollution that comes with living in modern society. Cherry Springs has some of the darkest night skies in the world and I would recommend it to anyone with even a passing interest in seeing the night sky as it used to be.
Our ride for the day - a brand new Infiniti QX80! Huge on the outside but surprisingly cramped inside.
Night Sky at Cherry Springs. The Milky Way galaxy is far more apparent with a camera than it is to the eye.
Milky Way from the Public Sky Viewing area.
Tired but happy on the return
03 Apr 2021
I am happy to announce that as of 9:05 am, April 3, 2021, I am part of a rapidly increasing COVID-19 statistic - the number of people vaccinated against the disease!
The time in the top-right corner of the vaccine document is the time at which I could safely leave after observing the 15-minute waiting period for side effects
I was able to get the Janssen/J&J single dose vaccine a bit earlier than expected by getting a callback from Walmart’s leftover vaccine waitlist and I am absolutely delighted to receive this particular vaccine. On the drive back from Walmart, I had Vampire Weekend’s Harmony Hall (a song which I had played on repeat for much of 2020) playing on the car’s stereo and I felt a surge of positivity that I had been missing for a long time.
I experienced some pretty strong side-effects from the J&J vaccine with chills, a headache, and a fever which reminded me of what bad viral fevers used to feel like after not having gotten sick in over a year. I had an ibuprofen when the symptoms started to get worse and by the next morning, the fever was gone. I was feeling a little weak for most of the day, but otherwise I felt pretty normal. It was definitely the strongest reaction I’ve ever had to a vaccine, but I have no complaints about it.
I think the biggest positive impact of being fully vaccinated in two weeks will be the added peace of mind in knowing that social decisions can be made without the mental gymnastics we all used to think through in 2020. It might take a little longer for things to really open back up in the US, but till then, this vaccine will make the ride a lot smoother. Vaccines for my demographic and area are still hard to come by, but fortunately it looks like that will change quickly in the next few weeks when general eligibility opens.
27 Dec 2020
Disclaimer: Although I currently work at NVIDIA, I wrote this article before I joined in July 2021. All opinions in this post are my own and not associated with my company.
Over the past few months, I’ve been getting pretrained deep learning (DL) models downloaded off GitHub to run for some robotics projects. In doing so, I’ve had to get inference running on an NVIDIA GPU, and in this post I want to highlight my experiences working with ML modelling libraries and NVIDIA’s own tools.
What Works Well
With the growth in machine learning applications over the last ten years, it looks like there’s never been a better time for NVIDIA. The NVIDIA CUDA libraries have turned their gaming GPUs into general-purpose computing monsters overnight. Vector and matrix operations are crucial to several ML algorithms, and these run several times faster on the GPU than they do on the fastest CPUs. To solidify their niche in this market, NVIDIA’s newer hardware comes with dedicated support for faster inference, such as the Deep Learning Accelerator (DLA) on Jetson modules and 16-bit floating point (FP16) optimizations on the 10-series and newer GPUs.
NVIDIA has developed several software libraries to run optimized ML models on their GPUs. The alphabet soup of ML frameworks and their file extensions took me some time to decipher, but the actual workflow of model optimization is surprisingly simple. Once you have trained your PyTorch or Tensorflow model, you can convert the saved model or frozen checkpoint to the ONNX model exchange format. ONNX lets you convert your ML model from one format to another, and in the case of NVIDIA GPUs, the format of choice is a TensorRT engine plan file. TensorRT engine plan files are bespoke, optimized inference models for a given configuration of CUDA, NVIDIA ML libraries, GPU, maximum batch size, and decimal precision. As such, they are not portable among different hardware/software configurations, and they can only be generated on the GPU you want to use for deployment.
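As a sketch of the first half of that workflow, here is roughly what a frozen-graph-to-ONNX conversion with tf2onnx looks like on the command line. The file names and node names below are placeholders for your own model, not anything specific to a real project:

```shell
# Convert a frozen Tensorflow graph to ONNX with tf2onnx.
# 'model.pb' and the node names are hypothetical placeholders;
# note the ':0' suffix tf2onnx expects on input/output names.
python -m tf2onnx.convert \
    --graphdef model.pb \
    --inputs input_tensor:0 \
    --outputs output_tensor:0 \
    --output model.onnx
```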
When the TensorRT conversion works as it’s supposed to, you can expect a significant 1.5-3x performance boost over running the Tensorflow/PyTorch model. INT8 models might perform even better, but this requires a calibration dataset for constructing the TensorRT engine. TensorRT is great for deploying models on the NVIDIA Xavier NX board, since you can also build TensorRT engines which run on the power-efficient DLAs as opposed to the GPU.
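For the engine-building step, NVIDIA ships a trtexec utility with TensorRT that can generate a plan file directly from an ONNX model. This is only a sketch with placeholder file names, and it must be run on the deployment machine itself since plan files are tied to that hardware:

```shell
# Build a TensorRT engine plan from an ONNX model on the target GPU,
# enabling FP16 precision for faster inference.
trtexec --onnx=model.onnx --saveEngine=model_fp16.plan --fp16

# On a Jetson board such as the Xavier NX, the engine can instead be
# targeted at a DLA, falling back to the GPU for unsupported layers.
trtexec --onnx=model.onnx --saveEngine=model_dla.plan \
        --fp16 --useDLACore=0 --allowGPUFallback
```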
What Needs Improvement
For all of NVIDIA’s good work in developing high-performance ML libraries for their GPUs, the user experience of getting started with their ML development software leaves something to be desired. It’s imperative to double-check all the version numbers of the libraries you need when doing a fresh installation. For instance, a particular version of tensorflow-gpu only works with a specific point release of CUDA, which in turn depends on using the right GPU driver version. All of these need to be installed independently, so there are many sources of error which can result in a broken installation. Installing the wrong version of TensorRT or cuDNN can also result in dependency hell. I am curious to know why all these libraries depend on such specific point releases of each other and why support for backward compatibility is so minimal. One way around this issue is to use one of NVIDIA’s several Docker images with all the libraries pre-installed. Occasionally, I also ran into issues with CUDA not initializing properly, but this was fixed by probing for the NVIDIA driver.
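A minimal sketch of the Docker route, assuming the NVIDIA Container Toolkit is installed on the host; the NGC image tag here is just an example and may not match the TensorRT version you need:

```shell
# Run an NGC TensorRT container with GPU access; CUDA, cuDNN, and
# TensorRT come pre-installed in mutually compatible versions.
docker run --gpus all -it --rm nvcr.io/nvidia/tensorrt:20.12-py3

# Inside the container, verify that the GPU is visible.
nvidia-smi
```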
As an aside, I also found that the accuracy of the TensorRT engine models is subjectively good, but there is a noticeable loss in accuracy compared to running the pretrained Tensorflow model directly. This can be seen here when evaluating the LaneNet road lane detection model:
Tensorflow Frozen Checkpoint Inference (80 fps)
TensorRT engine inference in FP32 mode (144 fps) and FP16 mode (228 fps)
The results are subjectively similar, but there is a clear tradeoff in the accuracy of lane detection and inference time when running these models on the laptop flavour of the NVIDIA RTX 2060.
On the Tensorflow and ONNX side of things, the conversion from a frozen Tensorflow checkpoint file to an ONNX model was more difficult than I anticipated. The tf2onnx tool requires the names of the input and output nodes for the conversion, and the best way to find these, according to the project’s GitHub README, is the summarize_graph tool, which isn’t included by default in a Tensorflow distribution. This means downloading and building parts of Tensorflow from source just to check the names of the input and output nodes of the model. Curiously enough, the input and output node names need to be in the format node:0, though appending the :0 is not explicitly mentioned in the documentation. I also learned that the .pb file extension can refer to either a Tensorflow SavedModel or a frozen model, and the two cannot be used interchangeably. Luckily, the experience becomes considerably less stressful once the ONNX file is successfully generated and Tensorflow can be removed from the picture. I also tried using the ONNX file with OpenVINO via Intel’s DL Workbench Docker image, and the conversion to an Intel-optimized model was a cinch. That said, the inference performance on an Intel CPU with the OpenVINO model wasn’t anywhere close to that on an NVIDIA GPU using a TensorRT model built from the same ONNX file.
I feel it’s important to make the setup procedure of DL libraries easier for beginners. The ONNX file format is a good step towards standardization, and the optimized model generation software from a variety of hardware manufacturers has been crucial in improving inference performance. The DLAs found on the Jetson boards are quite capable in their own right, while using a fraction of the power a GPU would use. Other vendors have taken note, with the Neural Engine debuting in the Apple A11 SoC. Intel has approached the problem from a different angle by adding AVX-512 for faster vector operations on the CPU and a Gaussian Neural Accelerator (presumably comparable to a DLA?) with its 10th Gen mobile SoCs. If the last few years are anything to go by, advancements in DL inference performance per watt show no sign of slowing down, and it’s not hard to foresee a future where a DLA in some shape or form becomes as standardized as the CPU and GPU on most computing platforms.
19 Dec 2020
The visions I had for grad school have not quite been realized during my time at UPenn.
The Statement of Purpose I used for my Master’s admissions was filled with the experiences I had working on projects during undergrad. I emphasized that if given the chance, my goal was to advance the state of the art in robotics through research. The sentiment was genuine, and based on my experiences in undergrad, I was ready to take a lighter course load during my master’s to make more time for research. I had pretty much made up my mind to take the thesis option at Penn - granting the chance to do original research for two full semesters, with a 20% reduction in course load to boot!
However, as I mentioned in my previous posts, my plans changed during my first year. I ended up taking time-consuming courses in my first and second semester which left little time to do research. That’s not to say it was for a lack of trying - for I did reach out to a couple of labs located at Pennovation Works, a business incubator and massive robotics research facility under one roof.
I had some good introductory conversations with the passionate PhDs and postdocs at these labs. One of the more notable discussions I had was when a postdoc at one of these labs asked me how I would solve one of the open research problems she was working on. Perplexed by the question, I said that I didn’t know enough to answer without reading some papers. After all, I had only just started approaching labs to pursue research and I wasn’t even sure where to begin till I got warmed up with the problem.
Ignoring my response, she repeated the question and asked me to try harder. When I was stumped for the second time, she said that I should try to pursue solving the problem with whatever experience I had from my previous projects. It was an epiphanic moment for me, because till then I had always thought research required substantial background and theory before building off existing work. It was eye-opening to imagine that solving a difficult problem with whatever I knew was a valid research direction in itself. I left that meeting feeling a lot more certain that I had come to the right place. Unfortunately, a couple of weeks after that meeting, COVID-19 caused the college and all research facilities to close for the foreseeable future. With the burden of courses and new lifestyle changes due to COVID, research was no longer at the top of my priority list and it looked like it would have to wait indefinitely.
In a strange twist of fate, it didn’t have to wait long. My summer internship ended up getting cancelled a few weeks before it was going to start, so to stay busy during the summer, I took a remote research project at a different lab from the ones I had approached earlier in the year. At first glance, the project I was handed looked like a good fit. It was about creating an autonomous system for quadcopters to avoid midair collisions, and I felt like I could leverage some of my previous work to get new ideas. However, soon after I started, the cracks in what I knew started to become apparent. The existing work in the project involved using Model Predictive Control (MPC) and convex optimization for planning collision-free trajectories. I only knew about MPC in passing, and convex optimization was not a topic I could hope to cover in time to do anything new in a twelve-week project. I would spend weeks on what I felt was a novel approach, only to find that the team of PhDs had tried it out without much luck long before. My drawers filled with reams of printed research papers from other university labs working on the same problem. I began rethinking the lab’s existing work, and it troubled me that the current research direction would only work for a very specific configuration of robots, which, as anyone working in robotics knows, is rarely useful for the real world. At some point, it was clear that the project wasn’t a good fit for me, so I dropped out of the lab soon after.
When it comes to original research, I feel master’s students are in a middling position, as the research work they aspire to do often requires coursework which one only completes near the end of the degree. To that end, I suggest taking advanced versions of courses (such as Model Predictive Control or Advanced Computer Perception) early on. It is important to know the background a particular research project requires, unless your idea is to solve the problem with what you already know. I also suggest prodding an existing research approach for weaknesses, as new insight can patch the holes in the current direction of a research project. It’s also best not to commit early to working on a project, at least not until you have a realistic plan of work for the duration you intend to work on it.
My experience with research was less than ideal, but I would be curious to find out if others have been able to overcome the hurdles which come with trying out a project out of one’s depth. Do let me know about your experiences in the comments!