Sekai Camera showed one possible future of augmented reality services for retrieving information from the environment. However, there was no mention of how it works. Here is another Sekai Camera demo. I think the information is not attached to a specific object. Instead, the tags float around the user's position, and the user selects one of them to see what others have left there. As such, no recognition method is required; the user's position is the important cue.
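If the tags really are keyed to position rather than to recognized objects, retrieval reduces to a simple proximity query against the user's GPS coordinates. Here is a minimal sketch of that idea; the tag data, coordinates, and 100 m radius are my own assumptions, not Sekai Camera's actual design:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def tags_near(user_lat, user_lon, tags, radius_m=100.0):
    """Return the tags left within radius_m of the user's position."""
    return [t for t in tags
            if haversine_m(user_lat, user_lon, t["lat"], t["lon"]) <= radius_m]

# Hypothetical tags left by other users (coordinates around Tokyo Station).
tags = [
    {"lat": 35.6812, "lon": 139.7671, "text": "great coffee here"},
    {"lat": 35.6586, "lon": 139.7454, "text": "view of Tokyo Tower"},
]
nearby = tags_near(35.6813, 139.7673, tags)
```

A user standing near the first tag would see only that tag; the second, a few kilometers away, is filtered out without any object recognition at all.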