While exploring the use of regular cameras for the development of the L-Trackers, we identified three major issues. First, grabbing a single frame was computationally more expensive than expected: real-time operation was hard to achieve even at moderate resolutions, and processing high-resolution images (4K) at a high frame rate (e.g., 60 frames per second) was not feasible. Second, our solution consumed energy even when no L-Beacon was detected, because the computer-vision pipeline that detects the L-Beacons had to process every frame. Third, a synchronization problem between the cameras and the code emitted by the L-Beacons led to a significant drop in correlation (around 20%) even under the most favourable conditions.
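The per-frame acquisition cost can be quantified with a simple benchmark. The sketch below is an assumption-laden illustration, not the project's actual pipeline: it assumes OpenCV (`cv2`), an illustrative camera index, and a 4K resolution request, and times individual `cv2.VideoCapture.read()` calls; the reciprocal of the average grab time bounds the achievable frame rate.

```python
import time

def summarize(grab_times):
    """Average grab latency in seconds and the frame rate it implies."""
    avg = sum(grab_times) / len(grab_times)
    return avg, 1.0 / avg

def benchmark(camera_index=0, n_frames=100, width=3840, height=2160):
    """Time n_frames single-frame grabs at the requested resolution."""
    import cv2  # imported here so summarize() is usable without OpenCV
    cap = cv2.VideoCapture(camera_index)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)    # the driver may fall back
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)  # to a lower resolution
    times = []
    for _ in range(n_frames):
        t0 = time.perf_counter()
        ok, _frame = cap.read()                 # grab + decode one frame
        if ok:
            times.append(time.perf_counter() - t0)
    cap.release()
    return times

# Usage on a machine with a camera attached (not run here):
#   times = benchmark()
#   avg, fps = summarize(times)
#   print(f"avg grab: {avg * 1000:.1f} ms -> at most {fps:.1f} fps")
```

Note that `read()` includes decoding as well as the grab itself, which is one reason single-frame acquisition at 4K can dominate the processing budget.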
We have also improved other positioning technologies already developed by researchers from the University of Minho, for example by collecting an extensive BLE5 dataset in different scenarios and analysing it.
The devised intelligent sensor fusor explores the concept of hyperspectral images to fuse information coming from different sources. An approach that represents similarities in Wi-Fi fingerprinting as an image was initially explored around the time ORIENTATE's proposal was submitted. In particular, this first approach was defined in TrackInFactory, where the similarity between an operational fingerprint and the radio map was represented as a gradient image.
Within ORIENTATE, we have gone a step further and provided an image-based representation of the probability that the current position belongs to each cell within the area, computed with a k-NN method.
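As a rough illustration of this kind of representation (a sketch under stated assumptions, not the project's actual implementation), the code below builds a per-cell probability image from a k-NN search over a radio map: each of the k nearest fingerprints in signal space casts a vote of 1/k for the cell it was recorded in. The function name, the Euclidean distance, and the flat cell indexing are all assumptions for the sake of the example.

```python
import numpy as np

def knn_cell_probability_image(radio_map, cell_ids, grid_shape, query, k=5):
    """Image-style positional likelihood for an operational fingerprint.

    radio_map:  (n, m) array of RSSI fingerprints (n samples, m APs)
    cell_ids:   (n,) flat grid-cell index of each fingerprint
    grid_shape: (rows, cols) of the area's cell grid
    query:      (m,) operational fingerprint
    Returns a (rows, cols) image whose cell values sum to 1.
    """
    d = np.linalg.norm(radio_map - query, axis=1)  # distance in signal space
    nearest = np.argsort(d)[:k]                    # indices of k nearest
    img = np.zeros(grid_shape[0] * grid_shape[1])
    for idx in nearest:
        img[cell_ids[idx]] += 1.0 / k              # each neighbour votes 1/k
    return img.reshape(grid_shape)

# Usage with a toy 2x2 grid: two fingerprints in cell 0, two in cell 3.
radio_map = np.array([[-50.0, -60.0], [-51.0, -61.0],
                      [-80.0, -90.0], [-81.0, -91.0]])
cell_ids = np.array([0, 0, 3, 3])
image = knn_cell_probability_image(radio_map, cell_ids, (2, 2),
                                   np.array([-50.0, -60.0]), k=2)
```

The resulting image can then be consumed like any other channel by an image-oriented fusion stage.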
To define an evaluation setup for optical-based positioning, we equipped a laboratory with three BSpoters (L-Trackers in ORIENTATE's proposal) fitted with regular cameras and collected two datasets. For the second dataset, we installed two additional BSpoters with IR-sensitive cameras. Both datasets will be released soon.