Journal of Participatory Medicine
Co-production in research and healthcare, technology for patient empowerment, and fostering partnerships with clinicians.
Editor-in-Chief:
Amy Price, DPhil, Senior Research Scientist, The Dartmouth Institute for Health Policy and Clinical Practice, Geisel School of Medicine, Dartmouth College, USA
CiteScore 3.1
Recent Articles

Patient engagement in research is the meaningful and active involvement of patient/caregiver partners (i.e., patients and their family/friends) in research priority-setting, conduct, and governance. With the proper support, patient/caregiver partners can inform every stage of the research cycle, but common barriers often prevent their full engagement.

The adoption of artificial intelligence (AI) in health care has outpaced education of the clinical workforce on responsible use of AI in patient care. Although many policy statements advocate safe, ethical, and trustworthy AI, guidance on the use of health AI has rarely included patient perspectives. This gap leaves out a valuable source of information and guidance about what responsible AI means to patients. In this viewpoint coauthored by patients, students, and faculty, we discuss a novel approach to integrating patient perspectives in undergraduate premedical education in the United States that aims to foster an inclusive and patient-centered future of AI in health care.

Public deliberation is a qualitative research method that has been used successfully to solicit lay people’s perspectives on health ethics topics, but questions remain as to whether this traditionally in-person method translates to the online context. The MindKind Study conducted public deliberation sessions to gauge the concerns and aspirations of young people in India, South Africa, and the United Kingdom regarding a prospective mental health databank. This paper details our adaptations to, and evaluation of, the public deliberation method in the online context, especially in the presence of a digital divide.

Waiting has become an unfortunate reality for parents seeking care for their child in the emergency department (ED). Long wait times are known to increase morbidity and mortality. Providing patients with information about their wait time increases satisfaction and sense of control. Yet few patient-facing artificial intelligence (AI) tools are currently in use in EDs, and fewer still have been co-designed with patients and caregivers.

Infectious diseases disproportionately affect rural and ethnic communities in Colombia, where structural inequalities such as limited access to health care, poor sanitation, and scarce health education worsen their effects. Education is essential for preventing and controlling infectious diseases, fostering awareness of healthy behaviors, and empowering communities with the knowledge and skills to manage their health. Participatory and co-design methods strengthen educational programs by ensuring cultural relevance, enhancing knowledge retention, and promoting sustainable community interventions.

Chronic health conditions (CHC) are a recognized risk factor for problems in sexual function (PSF). However, only a subset of affected individuals develops severe symptoms of sexual distress, the defining criterion for clinically relevant sexual dysfunction (SD) according to the ICD-11. Data on the contribution of specific CHC to clinically relevant SD symptoms and related healthcare needs are limited, hindering targeted interventions.

Launched in January 2022, the SingHealth Patient Advocacy Network @ Department of Emergency Medicine (SPAN@DEM) is the first emergency department-specific advocacy group in Singapore. This initiative marks a significant advancement in local patient advocacy because it employs a shared, collaborative model to address the needs and concerns of patients within the unique context of the emergency department. SPAN@DEM emerged in recognition of the limitations of existing cluster-level advocacy groups, which are not well suited to addressing the specific challenges inherent in the fast-paced, high-pressure emergency department environment.

Recommendations from professional bodies, including the Royal College of Psychiatrists, advise mental health practitioners to discuss problematic online use with children and young people. However, barriers such as knowledge gaps and low confidence in initiating discussions often prevent these conversations from happening.

The use of artificial intelligence (AI) in healthcare has significant implications for patient-clinician interactions. Practical and ethical challenges have emerged with the adoption of large language models (LLMs) that respond to prompts from clinicians, patients, and caregivers. With an emphasis on patient experience, this paper examines the potential of LLMs to act as facilitators, interrupters, or both in patient-clinician relationships. Drawing on our experiences as patient advocates, computer scientists, and physician informaticists working to improve data exchange and patient experience, we examine how LLMs might enhance patient engagement, support triage, and inform clinical decision-making. While affirming LLMs as tools enabling the rise of the “AI patient,” we also explore concerns surrounding data privacy, algorithmic bias, moral injury, and the erosion of human connection. To help navigate these tensions, we outline a conceptual framework that anticipates the role and impact of LLMs in patient-clinician dynamics and propose key areas for future inquiry. Realizing the potential of LLMs requires careful consideration of which aspects of the patient-clinician relationship must remain distinctly human, and why, even when LLMs offer plausible substitutes. This inquiry should draw on ethics and philosophy, align with AI imperatives such as patient-centered design and transparency, and be shaped through collaboration among technologists, healthcare providers, and patient communities.