Arc II: Deus Ex Machina

Nanubala Gnana Sai
Published in Geek Culture · 5 min read · Jul 25, 2021


A lot has happened this past month, leaving little time to collect my thoughts. Now that the first evaluation is behind us, I have a chance to pick up the rusty pen again. First, a note of thanks.

I’ve had the pleasure of working with the immensely talented Marcus Edel, the wise and courteous James J. Balamuta, and the ever-helpful Sayan Goswami. I’m thankful that they’re mentoring my endeavour. A special thanks to Shaikh Mohd. Fauz (Fauz), whose work has undoubtedly breathed life into this project.

Episode 1: Portfolio Optimization

The reader may recall earlier attempts at applying multi-objective optimizers to portfolio design tasks. They may also recollect that those attempts met a disappointing end.

Our first achievement was fixing the notebook’s bottleneck. The algorithm took too long to compute the Pareto front, which made generating it for large populations infeasible. It turned out the objective function was performing an expensive operation over and over again. To tackle this, we cached the intermediate results. Voila! The notebook now runs blazing fast. We also incorporated the new MOEA/D-DE algorithm, which solidified the point that MOEA/D-DE outperforms NSGA-II on this task.
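The caching fix can be illustrated with a minimal Python sketch. Everything here is hypothetical (the names and the covariance example are stand-ins, not mlpack’s actual code); the point is simply that the expensive intermediate result is computed once and reused on every later evaluation:

```python
from functools import lru_cache

import numpy as np

# Hypothetical stand-in for the expensive step the objective repeated
# on every call: computing the covariance of historical returns.
RETURNS = np.random.default_rng(42).normal(0.001, 0.02, size=(250, 10))

@lru_cache(maxsize=None)
def covariance():
    # Computed once; every later call is served from the cache.
    return np.cov(RETURNS, rowvar=False)

def portfolio_objectives(weights):
    """Return (risk, -expected_return) for one candidate portfolio."""
    w = np.asarray(weights)
    risk = float(w @ covariance() @ w)
    expected_return = float(RETURNS.mean(axis=0) @ w)
    return risk, -expected_return
```

With the cache in place, evaluating thousands of candidate portfolios per generation no longer re-runs the expensive computation each time.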

Fig 1: The evolutionary process of the two algorithms over 30 generations.

For the curious, I have logged the comparison between the algorithms here. I also encourage the reader to run the notebook in a Binder instance to satiate their curiosity.

Episode 2: New Release! Pacchis Din Mein Pesa Double

Depending on where you’re from, you’ll find the title either downright hilarious or utterly confusing. Let me clear the air for the latter: the ensmallen library has recently seen a spike in contributions (thanks to, guess who? ;)). The time was ripe for a new release. The mlpack library has a tradition of naming its releases in a hilarious fashion, so expectations were high, and I didn’t plan to disappoint.

It hit me: “What better way to leave a mark in the most comedic way than to quote a meme from Phir Hera Pheri?”, and so it was. Ladies and gentlemen, I present to you, the ensmallen library’s latest release: *drums rolling* “Pachis Din me Pesa Double”.

Fig 2: Akshay Kumar’s legendary dialogue from the movie “Phir Hera Pheri”.

Needless to say, everybody had a good laugh. It also received a lot of reactions (which I’m certainly proud of :D). You can download the latest release here.

Episode 3: A window of opportunity

The core TensorFlow Lite C++ API is notoriously difficult to understand. The term “C++” is misleading: the coding style is mostly C-like, with the standard and Abseil libraries sprinkled in. My first internship consisted of creating a framework to wrap this “wild” code in a safe, easy-to-use interface. Moreover, I was tasked with utilizing delegates such as GPU and Hexagon to boost performance.

Enter: GSoC’21 @ TensorFlow. To my surprise, TFLite was tackling the same problem I was working on during my internship. They were looking for someone to build the Pose Estimation task. I immediately filled out the application, but it was rejected.

Regardless, I reached out to the org to enquire whether I could contribute to the project. I had the opportunity to discuss my ideas with Lu Wang, Senior Software Engineer at Google. We decided on a schedule, and we’re now working on bringing the Super Resolution task into the library. It’s really exciting to work on such a reputable library and be mentored by someone so able.

Episode 4: Fly me to the moon!

“It’s not rocket science, Jimmy!” Well, it kinda is. It was time to let the cat out of the bag. The task? Design the injector of a rocket engine to maximize performance while minimizing thermal wear and tear.

A typical liquid rocket engine uses two propellants, a fuel and an oxidizer. The injector is responsible for mixing the propellants into tiny droplets for ready combustion. Aggressive mixing is good for performance but can melt down the injector face, whereas inefficient mixing leads to suboptimal performance. That looks like a problem with conflicting objectives, and my spidey sense tells me multi-objective optimizers would be helpful here.
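The trade-off can be sketched with a toy model. To be clear, this is not the actual injector physics from the notebook; the “objectives” below are made-up monotone functions chosen only to show why no single design wins on both counts, which is exactly what produces a Pareto front:

```python
def dominates(a, b):
    """True if solution a is at least as good as b on every objective
    (lower is better here) and strictly better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def injector_objectives(mixing):
    # Toy model (an assumption, not the real physics): more aggressive
    # mixing lowers the performance penalty but raises thermal wear.
    performance_loss = 1.0 / (0.1 + mixing)
    thermal_wear = mixing ** 2
    return (performance_loss, thermal_wear)

# Sweep mixing intensity and keep only the non-dominated designs.
candidates = [injector_objectives(m / 10) for m in range(1, 11)]
pareto = [c for c in candidates
          if not any(dominates(other, c) for other in candidates)]
```

Because every increase in mixing helps one objective and hurts the other, no candidate dominates another: the whole sweep lies on the Pareto front, and the optimizer’s job is to map that front so an engineer can pick the trade-off they want.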

But wait! We’re not done yet. This notebook is one of the hallmarks of my GSoC project, and I can’t let it die an ordinary death. I drew up some lofty ideas to enhance it further. One: render interactive 3D graphs to get a feel for the generated Pareto front; James suggested using bokeh for this task. Two: simulate an interactive rocket injector within the Jupyter notebook itself. These are very bold ideas, and I’m really thankful to my mentors for empowering and supporting my decision.

Since bokeh is a Python library, we decided to use the Script of Scripts (SoS) library. It is a multi-kernel system that allows passing data between different languages: I can compute data in C++ and render it with Python, and vice versa.

I was not sure how to go about the simulation part; searching for libraries turned up nothing. Fortunately, Fauz was the perfect man for the task. He has a deep understanding of design and is proficient in Unity, so he built a visualization library from scratch based on the specification. Collaborating with others to achieve a common goal: that’s the very spirit of open-source culture. Originally Greek, “Deus Ex Machina” is a literary device in which an unexpected power saves a seemingly hopeless situation. I think it fits the context here perfectly.

Fig. 3: Visualization tool, built by Fauz

You can find the repository for the animation here and a demo application here.

Epilogue: The Last Minute Change

As per the original proposal, the next task at hand was developing the SPEA-II algorithm. It functions very similarly to NSGA-II, with a better crowd-control mechanism. Implementing such a similar algorithm felt a bit boring, so I thought up a change to my proposal.

Instead of SPEA-II, we planned on multi-objective reinforcement learning. In a generic reinforcement-learning task, the agent maximizes a reward by taking actions. In the multi-objective scenario, the agent manages multiple reward functions simultaneously! Sounds exciting, doesn’t it?
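To give a flavour of what “multiple reward functions” means, here is a tiny hypothetical sketch (not the planned mlpack implementation): a bandit whose arms yield a reward *vector* rather than a scalar, handled by the simplest multi-objective RL technique, weighted-sum scalarization:

```python
# Hypothetical two-objective bandit: each arm yields a reward vector
# (speed, safety) instead of a single scalar reward.
ARMS = {
    "aggressive": (0.9, 0.2),
    "balanced":   (0.6, 0.6),
    "cautious":   (0.2, 0.9),
}

def scalarize(reward_vec, weights):
    """Weighted-sum scalarization: collapse the reward vector so a
    standard single-objective agent can act on it."""
    return sum(w * r for w, r in zip(weights, reward_vec))

def best_arm(weights):
    # The preferred action depends entirely on how the objectives
    # are weighted; each weight vector yields a different policy.
    return max(ARMS, key=lambda arm: scalarize(ARMS[arm], weights))
```

Scalarization is only the entry point; the more interesting approaches learn a whole set of Pareto-optimal policies at once, which is what makes the topic a natural fit for this project.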

Stay tuned for more updates on this.
