Abstract
Extracting logical knowledge is an important part of natural language understanding. This study introduces a new dataset for measuring a model's ability to translate natural language into logical form. Since logic captures how concepts infer from and reason about one another, learning the logical structure of language is essential to comprehending its meaning. There has recently been much interest in applying machine learning and deep learning methods to extract logical knowledge and represent logic from natural language. A generative model that has proven promising in recent sequence-labeling work is applied here to the task of representing logic. Experiments show that generative methods can extract logical forms from natural language texts; in particular, a sequence-to-sequence model is well suited to such extraction tasks. The model presented in this study may inspire future work toward more efficient solutions.
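As a minimal sketch of the task the abstract describes, the snippet below pairs natural-language sentences with logical forms and scores a model's output with exact match, a metric commonly used for sequence-to-sequence semantic parsing. The sentences, logic-form notation, and function names are illustrative assumptions, not taken from the paper's actual dataset or model.

```python
# Hypothetical (sentence, logic form) pairs of the kind such a dataset might
# contain; the notation here is an assumption for illustration only.
EXAMPLES = [
    ("every student passed", "all x. (student(x) -> passed(x))"),
    ("some dog barked",      "exists x. (dog(x) & barked(x))"),
]

def exact_match(predictions, references):
    """Fraction of predicted logic forms that exactly equal the reference."""
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return hits / len(references)

# A perfect model reproduces every reference form, so exact match is 1.0.
preds = [logic for _, logic in EXAMPLES]
refs = [logic for _, logic in EXAMPLES]
print(exact_match(preds, refs))  # → 1.0
```

Exact match is a strict criterion; real evaluations often supplement it with partial-credit metrics, since two syntactically different forms can be logically equivalent.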
Publisher
Darcy & Roy Press Co. Ltd.