All-in-One Urban Mobility Mapping Application with Optional Routing Capabilities
December 2018 – IEEE
A collaboration between the University of Tennessee at Chattanooga (UTC) and the Georgia Tech Research Institute (GTRI) to create a mobile application that combines visual data extracted from cameras on roadway infrastructure with a user’s coordinates from a GPS-enabled device to build a visual representation of the driving or walking environment surrounding the application user. By merging computer vision, object detection, and mono-vision image depth calculation, the application gathers absolute Global Positioning System (GPS) coordinates from a user’s mobile device, combines them with relative coordinates determined by the infrastructure cameras, and determines the positions of vehicles and pedestrians without knowledge of their absolute GPS coordinates. The fused data is then used by an iOS mobile application to display a map showing the locations of other entities such as vehicles, pedestrians, and obstacles, creating a real-time visual representation of the surrounding area before it enters the user’s field of view. Furthermore, a routing feature was implemented using the results of a traffic scenario analyzed by rerouting algorithms in a simulated environment. By displaying where nearby entities are concentrated and recommending optional routes, the application helps users make more informed and aware traffic decisions, contributing to a higher level of overall safety on our roadways. This vision would not be possible without the high-speed gigabit network infrastructure installed in Chattanooga, Tennessee, and UTC’s wireless testbed, which was used to test many functions of this application. This network was required to reduce the latency of the massive amount of data generated by the infrastructure and vehicles that utilize the testbed; returning results from this data in real time is a critical component. (DOI: 10.1109/BigData.2018.8622030)
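For illustration only, the sketch below shows one way the coordinate fusion described above could work, assuming each camera detection arrives as an east/north offset in meters from a known absolute anchor (for example, the camera’s surveyed position or the user’s GPS fix). The function name and the example values are hypothetical and not taken from the paper.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius in meters

def offset_to_gps(anchor_lat: float, anchor_lon: float,
                  east_m: float, north_m: float) -> tuple[float, float]:
    """Convert an east/north offset (meters) relative to a known anchor
    position into an absolute latitude/longitude.

    Uses a flat-earth (equirectangular) approximation, which is adequate
    for short camera-to-entity distances.
    """
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(east_m / (EARTH_RADIUS_M * math.cos(math.radians(anchor_lat))))
    return anchor_lat + dlat, anchor_lon + dlon

# Hypothetical example: a camera at a surveyed intersection reports a
# pedestrian 12 m east and 5 m north of its own position.
camera_lat, camera_lon = 35.0456, -85.3097
ped_lat, ped_lon = offset_to_gps(camera_lat, camera_lon, east_m=12.0, north_m=5.0)
print(f"pedestrian at ({ped_lat:.6f}, {ped_lon:.6f})")
```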
Data-fused Urban Mobility Applications for Smart Cities
August 2018 – UTC Master’s Thesis
M.S. Computer Science thesis covering the previously published See-Through Technology work and the subsequent All-in-One Urban Mobility Application (AIO), submitted to the faculty of the University of Tennessee at Chattanooga in partial fulfillment of the requirements for the degree of Master of Science.
Enhancing Driver Awareness Using See-Through Technology
April 2018 – SAE International
This paper presents a mobile application that combines visual data extracted from cameras on roadway infrastructure with a user’s coordinates from a GPS-enabled device to create a visual representation of the driving or walking environment surrounding the application user. By merging computer vision, object detection, and mono-vision image depth calculation, the application gathers absolute Global Positioning System (GPS) coordinates from a user’s mobile device and combines them with relative coordinates determined by the infrastructure cameras and their processing algorithms. The fused data is then used by an iOS mobile application to display a map showing the locations of other entities such as vehicles, pedestrians, and obstacles, creating a real-time visual representation of the surrounding area. Furthermore, a feature was implemented to display optional routing using the results of a traffic scenario analyzed by rerouting algorithms in a simulated environment. By displaying where nearby entities are concentrated and recommending optional routes, the application helps users make more informed and aware traffic decisions, contributing to a higher level of overall safety on our roadways.
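As a rough illustration of the mono-vision depth calculation referenced above, the sketch below uses the standard pinhole-camera relation between an object’s assumed physical height, the camera’s focal length, and the detection’s pixel height. The numeric values are assumptions for the example, not figures from the paper.

```python
def estimate_depth_m(real_height_m: float, focal_length_px: float,
                     bbox_height_px: float) -> float:
    """Pinhole-camera depth estimate: distance = (real height * focal length) / image height.

    real_height_m   -- assumed physical height of the detected object class
    focal_length_px -- camera focal length expressed in pixels
    bbox_height_px  -- height of the detection's bounding box in pixels
    """
    return (real_height_m * focal_length_px) / bbox_height_px

# Illustrative values only: an average passenger car (~1.5 m tall) whose
# bounding box spans 120 px in a camera with an 800 px focal length.
print(f"approx. distance: {estimate_depth_m(1.5, 800.0, 120.0):.1f} m")
```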
See-Through Technology Using V2X Communication
November 2017 – ACM Mid-Southeastern Conference
In recent years, we have seen an explosion in research and design pertaining to autonomous vehicles. As a result, there are growing safety concerns for drivers, passengers, and pedestrians. To improve the safety of autonomous vehicles, we focus our research on connected autonomous vehicles, which combine automation with connectivity. Following this concept, we explore computer vision (e.g., object detection) and Vehicle-to-Everything (V2X) communication, which includes Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication, to present an experiment on see-through technology that can give a vehicle a real-time augmented view of a traffic scene that would otherwise be blocked by the vehicle in front.
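For context, a hypothetical sketch of the kind of V2I payload a roadside unit might broadcast to a following vehicle to support such a see-through view. The field names and JSON encoding are assumptions, not the message format used in the experiment.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DetectionMessage:
    """Hypothetical V2I payload: one detected entity, as a roadside camera
    might broadcast it to nearby vehicles."""
    sender_id: str    # roadside unit / infrastructure camera identifier
    entity_type: str  # "vehicle", "pedestrian", "obstacle", ...
    latitude: float   # fused absolute position of the detected entity
    longitude: float
    timestamp: float  # UNIX time of the detection

msg = DetectionMessage(
    sender_id="rsu-example-03",
    entity_type="pedestrian",
    latitude=35.045712,
    longitude=-85.309645,
    timestamp=time.time(),
)
payload = json.dumps(asdict(msg))  # serialized for broadcast over the network
print(payload)
```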