Author:
Lee, Ryan C.; Corsano, Ariel; Tseng, Chung Yi; Chou, Leo Y. T.
Abstract
Deep learning algorithms, such as neural networks, enable the processing of complex datasets with many related variables, and have applications in disease diagnosis, cell profiling, and drug discovery. Beyond their use in electronic computers, neural networks have been implemented using programmable biomolecules such as DNA. This confers unique advantages, including greater portability, the ability to operate without electricity, and direct analysis of patterns of biomolecules in solution. Analogous to past bottlenecks in electronic computers, the computing power of DNA-based neural networks is limited by the ability to add more computing units, i.e., neurons. This limitation exists because current architectures require many nucleic acids to model a single neuron. Each neuron added to the network compounds existing problems such as long assembly times, high background signal, and cross-talk between components. Here we test three strategies to overcome this limitation and improve the scalability of DNA-based neural networks: (i) enzymatic synthesis to generate high-purity neurons, (ii) spatial patterning of neuron clusters based on their network position, and (iii) encoding neuron connectivity on a universal single-stranded DNA backbone. We show that neurons implemented via these strategies activate quickly, with a high signal-to-background ratio, and respond to varying input concentrations and weights. Using this neuron design, we implemented basic neural network motifs such as cascading, fan-in, and fan-out circuits. Since this design is modular, easy to synthesize, and compatible with multiple neural network architectures, we envision it will help scale DNA-based neural networks in a variety of settings. This will provide portable computing power for applications such as point-of-need diagnostics, compact data storage, and autonomous decision-making in lab-on-a-chip devices.
Publisher
Cold Spring Harbor Laboratory