
Design practitioners have become familiar with an array of evolving technologies such as virtual and augmented reality (VR/AR), artificial intelligence (AI), the internet of things (IoT), building information modeling (BIM), and robotics. What we contemplate less often, however, is what happens when these technologies are combined.

Enter the world of teleoperation, which is the control of a machine or system from a physical distance. The concept of a remote-controlled machine is nothing new, but advances in AR and communication technologies are making teleoperation more sophisticated and commonplace. One ultimate goal of teleoperation is telepresence, a term commonly used to describe videoconferencing, a passive audiovisual experience. But increasingly, it also pertains to remote manipulation. Telerobotics refers specifically to the remote operation of semi-autonomous robots. These approaches all involve a human–machine interface (HMI), which consists of “hardware and software that allow user inputs to be translated as signals for machines that, in turn, provide the required result to the user,” according to Techopedia. As one might guess, advances in HMI technology could significantly transform building design and construction.
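
Stripped to its essentials, an HMI is a translation loop: read a user input, convert it into a machine signal, and feed the machine’s result back to the user. The minimal sketch below illustrates that loop for a hypothetical two-joint robot arm driven by a joystick; the class, gain value, and mapping are illustrative assumptions, not any vendor’s API.

```python
from dataclasses import dataclass

@dataclass
class JointCommand:
    """A machine-side signal: target angles (radians) for each joint."""
    joint_angles: list[float]

def translate_input(stick_x: float, stick_y: float,
                    current: JointCommand) -> JointCommand:
    """Translate joystick deflection (the user input) into an updated
    joint command (the machine signal): the core job of an HMI."""
    GAIN = 0.05  # radians of joint motion per unit of stick deflection
    angles = current.joint_angles.copy()
    angles[0] += GAIN * stick_x  # base joint follows horizontal deflection
    angles[1] += GAIN * stick_y  # shoulder joint follows vertical deflection
    return JointCommand(angles)

# One cycle of the interface loop: read input, translate, send to machine.
command = translate_input(0.4, -0.2, JointCommand([0.0, 1.2, -0.6]))
print(command.joint_angles)  # -> [0.02, 1.19, -0.6]
```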

Fologram Talks: Holographic Brickwork from Fologram on Vimeo.

In one example, researchers at the University of Tasmania, in Australia, joined forces with local builder All Brick Tasmania to demonstrate the precise construction of a geometrically intricate brick wall. Using Fologram, an application that allows users to see CAD-based 3D modeling information superimposed over an actual view of a project site, bricklayers installed individual bricks to align with their digital counterparts, expediting a task that would otherwise require constant field measurement and verification. The finished wall is visually similar to the robotically fabricated panels by the Gramazio Kohler Research group out of ETH Zurich. In this case, however, the mason takes the place of the robot, augmenting their field experience with computer vision. Nevertheless, the Fologram tool anticipates a robot-led construction process in the future.
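
The comparison Fologram lets a mason make by eye, an as-placed brick against its digital counterpart, amounts to a simple tolerance check. The sketch below illustrates that idea under assumed coordinates and tolerance; the function and values are hypothetical and do not come from the Fologram application.

```python
import math

def placement_error(target_xyz, placed_xyz, tolerance_mm=3.0):
    """Compare a brick's as-placed position against its digital twin
    in the hologram and report whether it is within tolerance.
    Coordinates are millimeters in a shared site coordinate frame."""
    deviation = math.dist(target_xyz, placed_xyz)
    return deviation, deviation <= tolerance_mm

# Target position from the 3D model vs. position measured on site
# (e.g., by the headset's tracking); all values are illustrative.
dev, ok = placement_error((1200.0, 450.0, 300.0), (1201.5, 449.2, 300.4))
print(f"deviation = {dev:.1f} mm, within tolerance: {ok}")
```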

Circumstances that call for a remote human operator require telerobotics technologies, such as the one developed by the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. By linking a VR/AR platform to a robot arm, MIT researchers have enabled human users to direct robotic actions in real time. Given that network speeds have not always been ideal for AR, scientists have had to choose between two suboptimal approaches. The direct model approach pairs the user’s vision with that of the robot. This method is telepresence in spirit, but network lag can make users nauseated. The second approach, the cyber-physical model, renders a virtual copy of the robot and its environment, as in VR, so the physical reality remains unseen.

CSAIL researchers opted to develop a hybrid of these two strategies, pairing both views in a way that minimizes dizziness and allows users to calibrate the virtual and real perspectives. According to an October 2017 MIT press release, “The system mimics the homunculus model of the mind—the idea that there’s a small human inside our brains controlling our actions, viewing the images we see, and understanding them for us.” In the case of the robot, the human user effectively becomes the homunculus.
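
One way to picture the hybrid is as a weighted composite of the robot’s live camera feed and the rendered cyber-physical model, with a user-set offset registering the two views. The sketch below illustrates that idea, assuming frames arrive as NumPy arrays; the function and parameters are hypothetical, not MIT’s implementation.

```python
import numpy as np

def hybrid_view(camera_frame: np.ndarray, virtual_frame: np.ndarray,
                blend: float = 0.5,
                offset_px: tuple[int, int] = (0, 0)) -> np.ndarray:
    """Composite the robot's live camera frame with the rendered
    cyber-physical model. blend = 1 shows only the direct view,
    0 only the virtual copy; offset_px is a user calibration shift
    (dx, dy) that registers the virtual render onto the real image."""
    dx, dy = offset_px
    aligned = np.roll(virtual_frame, shift=(dy, dx), axis=(0, 1))
    mixed = blend * camera_frame + (1 - blend) * aligned
    return mixed.astype(camera_frame.dtype)

# Illustrative 480x640 RGB frames; in practice these would come from
# the robot's camera stream and the VR renderer, respectively.
cam = np.zeros((480, 640, 3), dtype=np.uint8)
virt = np.full((480, 640, 3), 255, dtype=np.uint8)
composite = hybrid_view(cam, virt, blend=0.6, offset_px=(4, -2))
```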

Tokyo-based company SE4 has created a similar telerobotics system that overcomes network lag by using AI to accelerate robotic control. Combining VR and computer vision with AI and robotics, SE4’s Semantic Control system can anticipate user choices relative to the robot’s environment. “We’ve created a framework for creating physical understanding of the world around the machines,” said SE4 CEO Lochlainn Wilson in a July interview with The Robot Report. “With semantic-style understanding, a robot in the environment can use its own sensors and interpret human instructions through VR.”
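
Wilson’s description points to an intent-based exchange: rather than streaming low-level motions through a laggy network, the operator sends one high-level instruction and the robot resolves the details with its own sensors. The sketch below is a hypothetical illustration of that split; the Intent fields and the planning stub are assumptions, not SE4’s API.

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """A high-level instruction sent once over the (possibly laggy)
    network; the robot plans the 'how' locally with its own sensors."""
    action: str         # e.g., "place"
    object_id: str      # an object recognized in the robot's scene model
    target_pose: tuple  # (x, y, z) in the robot's site frame, meters

def execute(intent: Intent) -> None:
    """Stand-in for on-robot planning: with semantic understanding,
    the robot resolves grasping, path, and collision avoidance itself,
    so the human never steers joint-by-joint through network lag."""
    print(f"Planning locally: {intent.action} "
          f"{intent.object_id} at {intent.target_pose}")

execute(Intent(action="place", object_id="beam_07",
               target_pose=(2.4, 0.8, 1.1)))
```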

Developed for construction applications, the system can anticipate potential collisions between physical objects, or between objects and the site, as well as determine how to move objects precisely into place (like the “snap” function in drawing software). Semantic Control can also accommodate collaborative robots, or “cobots,” to build in a coordinated fashion. “With Semantic Control, we’re making an ecosystem where robots can coordinate together,” SE4 chief technology officer Pavel Savkin said in the same article. “The human says what to do, and the robot decides how.”
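
Both capabilities can be illustrated with elementary geometry: a snap pulls a proposed placement to the nearest design-model anchor within a set radius, and an axis-aligned bounding-box test is among the simplest ways to anticipate a collision. The sketch below is illustrative only; its function names, tolerances, and coordinates are assumptions, not Semantic Control’s internals.

```python
def snap(pose_xyz, anchors, snap_radius=0.05):
    """Snap a proposed placement to the nearest design-model anchor
    if it falls within snap_radius (meters), mimicking a CAD 'snap'."""
    nearest = min(anchors,
                  key=lambda a: sum((p - q) ** 2
                                    for p, q in zip(pose_xyz, a)))
    dist = sum((p - q) ** 2 for p, q in zip(pose_xyz, nearest)) ** 0.5
    return nearest if dist <= snap_radius else pose_xyz

def boxes_collide(a_min, a_max, b_min, b_max):
    """Axis-aligned bounding-box test: a cheap way to anticipate a
    collision between two objects, or an object and the site."""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i]
               for i in range(3))

# Illustrative values: anchor points from a design model, in meters.
anchors = [(2.40, 0.80, 1.10), (2.40, 1.00, 1.10)]
print(snap((2.42, 0.81, 1.09), anchors))  # snaps to the first anchor
print(boxes_collide((0, 0, 0), (1, 1, 1),
                    (0.5, 0.5, 0.5), (2, 2, 2)))  # True: boxes overlap
```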

Eventually, machines may be let loose to construct buildings alongside humans. Despite the significant challenges robotics manufacturers have faced in creating machines that match the mobility and agility of the human body, Waltham, Mass.–based Boston Dynamics has made tremendous advances. Its Atlas humanoid robot, made of 3D-printed components for lightness, employs a compact hydraulic system with 28 independently powered joints, and it can move at speeds up to 4.9 feet per second (about 1.5 meters per second). Referring to Boston Dynamics’ impressive feats, Phil Rader, University of Minnesota VR research fellow, tells ARCHITECT that “the day will come when robots can move freely around and using AI will be able to discern the real world conditions and make real-time decisions.” Rader, an architectural designer who researches VR and telerobotics technologies, imagines that future job sites will likely be populated by humans as well as humanoids, one working alongside the other. The construction robots might be fully autonomous, says Rader, or “it’s possible that the robot worker is just being operated by a human from a remote location.”

[Editor's Note: This story appears as it was originally published on our sister site Architect.]