When we started theorising about ARki back in 2010, we imagined a parallel dimension of architecture and environments that ran alongside our physical reality and seamlessly merged with our surroundings. In this reality, architects could easily superimpose their designs within their physical sites, visualise digital models on tabletops, and experience walking through their designs in real time, without having to create physical prototypes or models. Initially, the AR technology available didn't allow us to freely create these experiences without the use of 2D image trackers, such as site plans, to augment these digital models within the real world. Moreover, the ability to create large-scale architectural propositions in AR was always limited to the use of industrial-scale printed markers as trigger images to kickstart the augmentation.
Our current reality, however, now seems somewhat more optimistic. The release of new AR SDKs from Apple and Google means that we can actually start to track physical locations and superimpose our virtual data directly in situ, without a single 2D image marker in sight. The possibilities become endless: we appear to be slowly catching up to the initial dream of seamless world augmentation, and with the release of ARKit and ARCore we are one step closer to our once imagined augmented realities, as demonstrated by the initial concept video for ARchi-Maton.
Welcome to Darf Blog.