Hypernetworks are a theoretical model of learning and memory inspired by molecular computation (Zhang et al., DNA10-2004 & DNA12-2006). The entire network is represented as a test tube of DNA molecules, where each molecule encodes a hyperedge of the hypernetwork. The underlying thesis is that a self-organizing process, simulating the evolution of a high-density soup of massively interacting molecules, is a powerful tool for building complex associative memories. The method does not require long DNA sequences and tolerates errors in chemical reactions, and is thus within the reach of current DNA technology. The learning algorithm that evolves the hypernetwork is based on the primitive molecular operations of variation (combinatorial chemistry), selection (gel electrophoresis), and amplification (polymerase chain reaction). The DNA library typically contains on the order of 10^15 molecules. Implementation of hypernetworks in wet DNA computing is in progress.
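The variation-selection-amplification cycle can be illustrated in silico. The following is a minimal sketch, not the wet-lab protocol or the authors' actual simulator: the function names (`sample_edge`, `matches`, `learn_hypernetwork`), the representation of a hyperedge as a (feature-indices, feature-values, label) triple, and all parameter values are illustrative assumptions.

```python
import random

def sample_edge(examples, order):
    """Variation: form a random hyperedge by sampling `order` feature
    positions (and the label) from a randomly chosen training example."""
    x, y = random.choice(examples)
    idx = tuple(sorted(random.sample(range(len(x)), order)))
    return (idx, tuple(x[i] for i in idx), y)

def matches(edge, x):
    """A hyperedge 'hybridizes' with pattern x if all its stored
    feature values agree with x at the stored positions."""
    idx, vals, _ = edge
    return all(x[i] == v for i, v in zip(idx, vals))

def learn_hypernetwork(examples, order=3, library_size=1000, rounds=20):
    """Evolve a hyperedge library by repeated variation, selection,
    and amplification against training patterns."""
    library = [sample_edge(examples, order) for _ in range(library_size)]
    for _ in range(rounds):
        x, y = random.choice(examples)  # present one training pattern
        # Selection: retain hyperedges whose pattern and label both match.
        kept = [e for e in library if matches(e, x) and e[2] == y]
        # Amplification: duplicate the survivors, then top up with
        # fresh random hyperedges to keep the library size constant.
        library = (kept * 2)[:library_size]
        library += [sample_edge(examples, order)
                    for _ in range(library_size - len(library))]
    return library
```

In this toy form, copy number in the library plays the role of molecular concentration: hyperedges consistent with the data are amplified and come to dominate the tube.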
We have recently performed extensive simulations of the molecular hypernetwork to characterize its strengths and potential limitations in practical applications. As a machine learning model, the hypernetwork has several interesting properties. It captures higher-order interactions among features, which is useful for knowledge discovery. The hypergraph structure can be converted into association rules, making it comprehensible to domain experts. In biological pattern discovery, for example, hyperedges allow biologists to identify building blocks such as network motifs and modules. In addition, a hypernetwork achieves high accuracy and generalization performance because it builds a weighted ensemble of a large number of hyperedges.
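The ensemble behavior described above can be sketched as a simple vote. This is an illustrative assumption, not the authors' implementation: hyperedges are represented as (feature-indices, feature-values, label) triples, and a duplicated (amplified) hyperedge votes once per copy, so copy number acts as the edge's weight.

```python
from collections import Counter

def classify(library, x):
    """Ensemble prediction: each hyperedge that matches pattern x
    casts a vote for its label; amplified (duplicated) hyperedges
    vote repeatedly, so copy number serves as the edge's weight."""
    votes = Counter()
    for idx, vals, label in library:
        if all(x[i] == v for i, v in zip(idx, vals)):
            votes[label] += 1
    # Return the majority label, or None if no hyperedge matched.
    return votes.most_common(1)[0][0] if votes else None

# Toy library: two copies of an "A" edge and one "B" edge.
library = [((0, 2), (1, 1), "A"),
           ((0, 2), (1, 1), "A"),
           ((1, 3), (0, 0), "B")]
print(classify(library, (1, 0, 1, 0)))  # all three edges match; "A" wins 2-1
```

Because each hyperedge inspects only a few feature positions, the matching step is a primitive pattern-lookup rather than a numerical computation, which is what makes the model a natural fit for associative-memory hardware.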
The promising results of computer simulations of molecular evolutionary learning of hypernetworks, together with the difficulty of large-scale simulations, led us to build a customized silicon chip to accelerate the simulation. We designed and fabricated a hypernetwork model on an FPGA in collaboration with Inha University. The chip's performance is being benchmarked on image pattern recognition and cancer diagnosis data. We found the hypernetwork model well suited to hardware implementation in memory chip technology, since it "learns" by primitive memory-access and matching processes, analogous to the "molecular" operators of hybridization, selection, and amplification, and thus requires no high-precision numerical calculations. We are interested in how far this technology can be pushed to solve large-scale problems, especially in biological data mining, pattern classification, multimedia information retrieval, and web data mining.