Abstract
This work comprehensively analyzes the error robustness of hyperdimensional computing (HDC) using FeFET-based local-multiply and global-accumulate computation-in-memory. HDC trains and infers with hypervectors (HVs). Symmetric and asymmetric errors, which model the read-disturb and data-retention errors of FeFETs, are injected into the Item memory and/or the Associative memory before, after, or during training in various scenarios while solving a European language classification task. The detailed error injection reveals that HDC tolerates both symmetric and asymmetric error rates of up to 10⁻¹. Building on this detailed analysis of error robustness, a training window slide (TWS) scheme improves robustness against memory errors by discarding training data that contain differing amounts of errors, achieving 10 times higher error robustness. In addition, parallelizing HV encoding during training enables fast training with up to 10,000-way parallelism while maintaining inference accuracy.
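To make the evaluation setup concrete, the following is a minimal sketch of HDC language classification with memory-error injection, assuming a typical character-level n-gram encoding with binary HVs of dimension D = 10,000. The alphabet, training snippets, and function names are illustrative assumptions, not the paper's exact implementation; symmetric errors use equal 0→1 and 1→0 flip probabilities, while asymmetric read-disturb/retention errors use unequal rates.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # HV dimensionality; a common choice in HDC (assumption)

# Item memory: one random binary HV per character (illustrative alphabet).
alphabet = "abcdefghijklmnopqrstuvwxyz "
item_memory = {c: rng.integers(0, 2, D, dtype=np.uint8) for c in alphabet}

def encode(text, n=3):
    """Encode text as the bitwise majority over its n-gram HVs."""
    grams = []
    for i in range(len(text) - n + 1):
        hv = np.zeros(D, dtype=np.uint8)
        for j, c in enumerate(text[i:i + n]):
            # Bind by XOR after rotating each character HV by its position.
            hv ^= np.roll(item_memory[c], j)
        grams.append(hv)
    # Bundle: bitwise majority vote across all n-gram HVs.
    return (np.sum(grams, axis=0) * 2 > len(grams)).astype(np.uint8)

def inject_errors(hv, p01, p10):
    """Flip 0->1 bits with prob p01 and 1->0 bits with prob p10.
    p01 == p10 models symmetric errors; unequal rates model the
    asymmetric read-disturb / data-retention errors of FeFETs."""
    flips = np.where(hv == 0, rng.random(D) < p01, rng.random(D) < p10)
    return hv ^ flips.astype(np.uint8)

# Associative memory: one prototype HV per language class (toy data).
train = {"en": "the quick brown fox jumps over the lazy dog",
         "de": "der schnelle braune fuchs springt ueber den hund"}
assoc_memory = {lang: encode(txt) for lang, txt in train.items()}

# Inject symmetric errors at rate 1e-1, then classify by Hamming distance.
query = inject_errors(encode("the dog jumps"), p01=1e-1, p10=1e-1)
pred = min(assoc_memory, key=lambda k: np.count_nonzero(assoc_memory[k] ^ query))
print(pred)  # expected: "en" despite the injected errors
```

In this sketch the errors are applied to the query HV at inference time; the paper's study additionally injects errors into the Item memory and the Associative memory before, after, and during training, which the same `inject_errors` routine could model by corrupting those stored HVs instead.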
Subject
General Physics and Astronomy, General Engineering