Augmented Reality Framework

This was my senior design project at CSU LA. It was commissioned by the Jet Propulsion Laboratory (JPL).
This project deserves a permanent place on this website because of the team that I worked with. I loved everyone on the team.
Technology-wise, the project applied classic computing concepts such as computer graphics, motion sensors, and camera/video capture to a newer visualization technique called augmented reality (AR). The project was initially scoped as a proof of concept to show JPL that current-generation phone hardware could handle AR visualization of their scientific data. The team successfully incorporated visualization of multiple hydrology data types available from JPL's hydrology database at the time, including wells, reservoirs, and some rivers.
Google's ARCore, Apple's ARKit, and Unity were explored in the first phase to determine whether developing a separate AR framework was necessary. After its research, the team concluded the following: Google's ARCore was very primitive at the time, and performance was poor on the few devices it did support; Apple's ARKit performed well but required calibration (finding a flat surface) and did not allow for the flexibility of cross-platform development later on, so JPL would have had to decide whether to commit to Apple devices for deployment going forward; last but not least, Unity's AR support satisfied most of the requirements, but its licensing fee meant widespread deployment would have been costly to maintain. Additionally, all three frameworks shared one fatal drawback: they were not open source, so future development and deployment were entirely up to their individual maintainers. JPL would have been completely at their mercy for a project whose lifetime could stretch to ten years or more.
So a custom framework was decided on as the final deliverable. A separate demo app implementing the framework as an example use case would also be developed in phase 2, after the basics of the framework had been built and the framework itself was ready for initial deployment.
The team barely finished both the framework and the demo app within the allowed time frame (two school semesters, or approximately 8 months total). Given more time, we would have been able to flesh out and polish the framework further, but under the circumstances, I feel that what we accomplished was the best we could have done, and I was very proud of the team for pushing so hard. Completing a framework with a concrete architecture plus a demo app within 8 months of development time, without any deal-breaking bugs and with only a slight lack of features, is remarkable! I had some industry experience developing mobile apps by then, and the amount of work was equivalent to building two separate apps that depended on one another. That was no small feat.
My thanks to Wilbert Veit (team leader), Ernesto Padilla, Kaicheng (Ivan) Zhou, and Christopher Hung Nguyen. You guys were by far the best team I have ever been a part of!

Mentionables:
The framework was developed primarily using the Camera2, OpenGL ES 2.0, and sensor APIs on the Android platform. JitPack was used to build each successive version of the framework for the demo app to consume in phase 2, and it sped up the process immensely because it pulled builds directly from our GitHub repository.
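To give a flavor of how those pieces fit together, here is a minimal Kotlin sketch of the sensor side: it reads Android's rotation-vector sensor and turns it into a matrix that a GL renderer could use as the view matrix when drawing markers over the Camera2 preview. The class name and structure are illustrative only, not the framework's actual code.

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Hypothetical helper: tracks device orientation for an AR overlay.
class OrientationTracker(context: Context) : SensorEventListener {
    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val rotationMatrix = FloatArray(16)
    private val viewMatrix = FloatArray(16)

    fun start() {
        sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR)?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_GAME)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        if (event.sensor.type == Sensor.TYPE_ROTATION_VECTOR) {
            // Rotation vector -> 4x4 rotation matrix describing device orientation.
            SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values)
            // Remap the axes for a device held upright, looking through the camera,
            // so the result can serve as the view matrix for the OpenGL ES overlay.
            SensorManager.remapCoordinateSystem(
                rotationMatrix, SensorManager.AXIS_X, SensorManager.AXIS_Z, viewMatrix
            )
        }
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) = Unit

    // Latest view matrix, read by the renderer each frame.
    fun currentViewMatrix(): FloatArray = viewMatrix.copyOf()
}
```

On the JitPack side, consuming the framework from the demo app amounts to a couple of Gradle lines; the repository coordinates below are placeholders, not our actual GitHub repo.

```kotlin
// build.gradle.kts of the demo app (placeholder coordinates).
repositories {
    maven { url = uri("https://jitpack.io") }
}

dependencies {
    // JitPack builds the artifact straight from a GitHub tag or commit,
    // so each new framework version was one tag push away from the demo app.
    implementation("com.github.example-user:ar-framework:v1.0.0")
}
```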