
Calls for Papers

Call for Papers on the Interface Between Human Users and Machine Learning Models in Medical Decision Making

The term “machine learning” (ML) encompasses a broad range of techniques, algorithms, and models that can be used for prediction modeling. As Deputy Editor Lauren Cipriano notes in an editorial in the February 2023 issue of Medical Decision Making,[1] these techniques have many potential applications in medical decision making (MDM).

Medical Decision Making often receives manuscripts that report on the development and testing of ML algorithms to predict medical outcomes with the aim of assisting decision making. While our mission does not include publishing papers about the statistical techniques and details of machine learning, we are interested in the interface between machine learning and actual decisions made by human beings. After all, the potential value and risks of using ML tools to inform, guide, and sometimes perform decision making ultimately depend on the humans who design them and the humans who use them.

In this Call for Papers, we seek original research manuscripts presenting work at the interface between ML models in MDM and the people who might need to use those models to improve decisions and health outcomes. Relevant research may consider any decision maker, including clinicians, patients, policymakers, and/or the public more generally, as well as how to make predictive models more useful to those decision makers. Such work may touch on one or more of the following research domains:

  • Model explainability. In other words, how MDM ML model structure and characteristics are presented to users.
    • E.g., how can MDM ML models be described so that users understand how they work and the logic they follow, and come to see them as more than an indecipherable “black box”?
    • E.g., what leads clinicians or patients to trust or mistrust ML risk prediction or life expectancy models?
    • E.g., what tests of accuracy are most useful?
  • Model usability. In other words, how MDM ML model inputs and outputs are structured to improve usability and understanding of model strengths and weaknesses.
    • E.g., do certain types of graphical outputs enable better understanding of ML model outputs by users, such as practicing clinicians or patients?
    • E.g., what factors affect user understanding of the limitations of ML model predictions?
  • Model bias and management of such bias. In other words, how MDM ML models may incorporate existing biases in underlying data systems, structural and interpersonal biases in data sources, and similar distortions.
    • E.g., what approaches are effective at removing bias or correcting for it in MDM ML models?
  • Non-statistical criteria for designing and selecting MDM ML models. In other words, how model design may be affected by the preferences or capabilities of users.
    • E.g., are MDM ML models designed certain ways because they align with the expectations of users, and what are the implications of such choices?
  • Practical effects of ML model use or non-use.
    • E.g., in practice, how do ML models affect medical decisions made by humans? How do the resulting decisions compare with those of the model alone or the human alone?

As a relevant example, Weyant and Brandeau (2022) recently presented in Medical Decision Making an ML-based meta-modeling method that could “be used to simplify complex models for personalization, allowing for variable selection in addition to improved model interpretability and computational performance.”[2]

We expect to publish a special issue focusing on human interactions with machine learning MDM models in 2024. Manuscripts should be submitted to Medical Decision Making by October 1, 2023, to receive priority consideration for the planned special issue on this topic.

To be clear, this Call for Papers does not include research that focuses on creating statistically optimal models. Our interest lies neither in specific models nor in particular ML techniques. Instead, we seek to identify high-quality research on increasing the relevance, explainability, and usability of MDM ML models for human users.

Although the special issue will focus on machine learning MDM models, the editors of Medical Decision Making wish to emphasize that we have a broad and continuing interest in the topic of human-model interfaces. We will always welcome research manuscripts examining the interface between human users and other types of MDM models, such as regression-based risk prediction models and decision-analytic models relying on decision trees, Markov models, or other disease and/or system process simulations.

References

  1. Cipriano L. Evaluating the Impact and Potential Impact of Machine Learning on Medical Decision Making. Medical Decision Making. 2023;43(2).
  2. Weyant C, Brandeau ML. Personalization of Medical Treatment Decisions: Simplifying Complex Models while Maintaining Patient Health Outcomes. Medical Decision Making. 2022;42(4):450-460. doi:10.1177/0272989X211037921

Tutorial and Explainer Articles

The journals Medical Decision Making (MDM) and MDM Policy & Practice (MDM P&P) are seeking tutorial and explainer articles that illustrate best practices in medical decision making research, practice, and policy making.

Medical Decision Making seeks to publish tutorial articles that teach fellow researchers and practitioners methodological best practices and cutting-edge techniques. Tutorial articles should approach their topics at a level higher than a foundational textbook yet remain accessible to a reader who lacks experience in the specific techniques being discussed. Put another way, MDM tutorials should be written to meet the needs of someone who asks, “I want to [build an X type of model / clearly communicate Y type of risk data / evaluate the cost-effectiveness of Z type of intervention / etc.]. What’s the right way to get started?”

In MDM Policy & Practice, we will publish explainer articles that highlight the relevance and applicability of MDM techniques to solving practical, real-world problems. Explainers should be written for a broad audience that might include practicing clinicians, policymakers, journalists, and/or patients, as appropriate to the topic. For example, an explainer article might show how cost-effectiveness analyses can help clarify which novel and expensive therapies are actually worth investing in, or it might discuss what kinds of patient-provider interactions actually align with the concept of shared decision making in the MDM literature.

It is our intent that both tutorial and explainer articles will be regular features in our journals. Authors interested in writing either tutorials or explainers are highly encouraged to contact the MDM Editorial Office (mdm-journal@umich.edu) to receive feedback on their topic ideas prior to submission.

(October 2020)