
A baby bird must learn to fly in order to leave the nest. One way for it to learn is by example: it sees its parents' behavior and repeats it. With the right teacher, could an origami bird learn to flap its wings? In other words, how can we teach inanimate objects to reproduce life-like behaviors?

In our recent study, we addressed some of these questions by expanding the paradigm of physical learning: an emerging field in science and engineering in which (meta)materials acquire desired behaviors by exposure to examples.


Figure 1a-d. Learning dynamic states. Figures 1a-b show the general strategy: during a training period (Fig. 1a), a time-dependent state is imposed on the system, pictured here as an origami bird whose wings are periodically moved by an external agent. After training (Fig. 1b), a retrieval period, during which the physical system evolves under the learned dynamics, should produce a dynamic state that matches the training as closely as possible. Figures 1c-d demonstrate the training and retrieval procedure using a programmable LEGO toy. The angular positions of two motors are imposed by hand during a training phase (Fig. 1c), during which couplings between the motors are learned. The dynamics during retrieval, with the learned couplings, can produce fixed points as well as dynamic states in which the angles change constantly (Fig. 1d). See this video for a detailed description.

We know that biological systems adapt to their environment by changing their behavior in response to past events, as captured by the saying "once bitten, twice shy." In the brain, for instance, neurons that fire together wire together. But the ability to learn is not limited to sentient or animate beings; it can emerge in natural physical processes. Inanimate systems can also evolve their microscopic interactions to effectively learn desired behaviors after experiencing examples of those behaviors, a phenomenon previously referred to as physical learning. So far, physical learning has been applied to static properties, such as achieving a material with desired mechanical properties or recalling certain self-assembled structures. In our recent article, we ask: how can a physical system learn time-dependent functionalities such as pathways, trajectories, or dynamic states?

The essence of our approach is sketched in Fig. 1: during a period called training, an external agent imposes on the system a dynamic state that breaks time-reversal symmetry (e.g. it moves the wings of the origami bird in Fig. 1a; see the video demonstration). We say that a process has time-reversal symmetry if a recording of the process looks the same whether it is played forward or in reverse. Next, during a period known as retrieval, the desired time-dependent state (e.g. the wings flapping) is recovered as the system evolves according to the interactions learned during training (Fig. 1b).
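To make the training-and-retrieval idea concrete, here is a minimal cartoon in Python, not the actual model from the paper: two Kuramoto-like rotors are driven during training with a fixed phase lag (a time-reversal-asymmetric signal), while a hypothetical Hebbian-like rule correlates each rotor's velocity with its neighbor's relative phase. The rule produces non-reciprocal couplings, and during retrieval those couplings sustain a steadily rotating state resembling the training signal.

```python
import numpy as np

def train_couplings(thetas, dt, eta=0.1):
    # Hypothetical history-dependent learning rule (an illustration, not the
    # paper's actual rule): J[i, j] integrates the product of rotor i's
    # angular velocity and the sine of rotor j's phase relative to rotor i.
    # A phase-lagged training signal then yields non-reciprocal couplings,
    # J[i, j] != J[j, i].
    n_steps, n = thetas.shape
    J = np.zeros((n, n))
    for t in range(1, n_steps):
        vel = (thetas[t] - thetas[t - 1]) / dt  # angular velocities
        for i in range(n):
            for j in range(n):
                if i != j:
                    J[i, j] += eta * vel[i] * np.sin(thetas[t, j] - thetas[t, i]) * dt
    return J

def retrieve(J, theta0, dt, n_steps):
    # Retrieval: the rotors evolve freely under Kuramoto-like dynamics
    # with the learned couplings (forward Euler integration).
    theta = np.array(theta0, dtype=float)
    traj = [theta.copy()]
    for _ in range(n_steps):
        dtheta = np.array([sum(J[i, j] * np.sin(theta[j] - theta[i])
                               for j in range(len(theta)) if j != i)
                           for i in range(len(theta))])
        theta = theta + dt * dtheta
        traj.append(theta.copy())
    return np.array(traj)

# Training example: two rotors turning together with a fixed phase lag.
# The lag breaks time-reversal symmetry (reversing the movie flips its sign).
dt, omega, lag = 0.01, 1.0, 0.5
t = np.arange(0, 10, dt)
training = np.stack([omega * t, omega * t - lag], axis=1)

J = train_couplings(training, dt)
traj = retrieve(J, theta0=[lag, 0.0], dt=dt, n_steps=5000)
```

In this toy, the learned couplings come out antisymmetric (J[0, 1] ≈ -J[1, 0]), and retrieval produces angles that keep rotating with the trained phase lag; with an in-phase (time-reversible) training signal, the couplings of this particular rule vanish and no motion is retrieved.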

In our paper, we identify two ingredients, common across all experimental platforms, that are needed to learn time-dependent behaviors: (i) learning rules that depend on the history of the training process and (ii) exposure to examples that break time-reversal symmetry during training. After providing a hands-on demonstration of these requirements using programmable LEGO toys, we turn to realistic particle-based simulations (see the video demonstration). Instead of programming the learning rules by hand, we explain how they emerge from simple physico-chemical processes involving the causal propagation of chemical fields released and sensed by the particles. This rich phenomenology is captured by a modified spin model amenable to analytical treatment.
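The second ingredient can be made quantitative with a simple diagnostic (our own illustrative sketch, not a method from the paper): compare the cross-correlation of two signals at positive and negative delays. For a time-reversible pair of signals the two coincide, so any mismatch is a signature of broken time-reversal symmetry. Phase-lagged oscillations, like the wing motions used for training, fail this symmetry test, while in-phase motions pass it.

```python
import numpy as np

def time_reversal_asymmetry(x, y, max_lag):
    """Sum of squared differences between the cross-correlation of x and y
    at delay +k and at delay -k. Zero (up to noise) for a time-reversible
    pair of signals; positive when time-reversal symmetry is broken."""
    x = x - x.mean()
    y = y - y.mean()
    asym = 0.0
    for k in range(1, max_lag + 1):
        c_forward = np.mean(x[:-k] * y[k:])   # correlation at delay +k
        c_backward = np.mean(x[k:] * y[:-k])  # correlation at delay -k
        asym += (c_forward - c_backward) ** 2
    return asym

t = np.arange(0, 100, 0.01)
in_phase = time_reversal_asymmetry(np.sin(t), np.sin(t), max_lag=50)
lagged = time_reversal_asymmetry(np.sin(t), np.sin(t - 0.5), max_lag=50)
```

Here `in_phase` vanishes while `lagged` is of order one: only the phase-lagged pair provides the time-asymmetric examples that training requires.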

Our research motivates the exploration of intelligent, trainable particles that can self-assemble into structures that move or change shape on demand, by retrieving the dynamic behavior seen during training. This strategy can be applied not only in robotic matter, by directly programming a computer, but also in physico-chemical active matter systems where the process of learning emerges naturally. The principles illustrated here are a step towards von Neumann's dream of engineering synthetic systems with life-like behaviors, and they shed light on how artificial life itself may originate from primitive components capable of adapting to their environment.

Study co-authors:

Rosalind Huang, Undergrad: James Franck Institute & Department of Physics, The University of Chicago

Michel Fruchart, CNRS researcher (CR): Gulliver, ESPCI Paris, Université PSL, CNRS

Pepijn G. Moerman, Assistant Professor: Chemical Engineering and Chemistry, Eindhoven University of Technology

Suriyanarayanan Vaikuntanathan, Professor: James Franck Institute and Department of Chemistry, The University of Chicago

Arvind Murugan, Associate Professor: Department of Physics, The University of Chicago

Vincenzo Vitelli, Professor: James Franck Institute, Kadanoff Center for Theoretical Physics & Department of Physics, The University of Chicago
