Lessons Learned from Historical Failures, Limitations and Successes of AI/ML in Healthcare and the Health Sciences. Enduring Problems, and the Role of Best Practices
Authors:
Aliferis Constantin, Simon Gyorgy
Abstract
This chapter covers a variety of case study-based incidents and concepts that are valuable for identifying pitfalls, suggesting best practices, and supporting their use. Examples include: the Gartner hype cycle; the infamous “AI winters”; limitations of early-stage knowledge representation and reasoning methods; overfitting; using methods not built for the task; over-estimating the value and potential of early and heuristic technology; developing AI disconnected from real-life needs and application contexts; over-generalizing the theoretical shortcomings of one algorithm to all algorithms in its class; misinterpreting computational learning theory; failures and shortcomings of the literature, including technically erroneous information and the persistence of incorrect findings; meta-research yielding unreliable results; failures and shortcomings of modeling protocols, data, and evaluation designs (e.g., competitions); failures and shortcomings of specific projects and technologies; and contextual factors that may render guidelines themselves problematic. These case studies were often followed by improved technology that overcame various limitations. Together, they reinforce, and demonstrate the value of, science-driven practices for addressing enduring and new challenges.
Publisher
Springer International Publishing
References (153 articles).
1. O’Leary DE. Gartner’s hype cycle and information system research issues. Int J Account Inform Syst. 2008;9(4):240–52.
2. Russell SJ, Norvig P. Artificial intelligence: a modern approach. Pearson Education; 2010.
3. AI Winter. Wikipedia. https://en.wikipedia.org/wiki/AI_winter
4. Marcus G. Deep learning is hitting a wall. Nautilus; 2022.
5. Minsky M, Papert S. Perceptrons: an introduction to computational geometry. Cambridge, MA: MIT Press; 1969.