Abstract
The ability to switch between tasks effectively in response to external stimuli is a hallmark of cognitive control. Our brain can filter and integrate external information to accomplish goal-directed behavior. Task switching occurs rapidly and efficiently, allowing us to perform multiple tasks with ease. Similarly, artificial neural networks can be tailored to exhibit multi-task capabilities and achieve high performance across domains. In terms of explainability, understanding how neural networks make predictions is crucial for their deployment in many real-world scenarios. In this study, we delve into the neural representations learned by task-switching networks, which use task-specific biases for multitasking. Task-specific biases, mediated by context inputs, are learned by alternating the tasks the neural network learns during training. Using the MNIST dataset and binary tasks, we find that task-switching networks produce representations that resemble those of other multitasking paradigms: parallel networks in the early stages of processing and sequential networks in the last stages. We analyze the importance of inserting task contexts at different stages of processing and their role in aligning the task with relevant features. Moreover, we visualize how networks generalize neural representations during task-switching across different tasks. The use of context inputs improves the interpretability of simple neural networks for multitasking, helping to pave the way for the future study of architectures and tasks of higher complexity.
Publisher
Cold Spring Harbor Laboratory