9 Sept 2025
AI can support better Parkinson’s care, but patients and healthcare professionals want to make sure it’s trustworthy
Artificial intelligence (AI) is becoming an increasingly important part of healthcare innovation, offering new possibilities for earlier diagnosis, symptom monitoring, and treatment optimisation in Parkinson’s disease (PD). Alongside this potential, people with Parkinson’s (PwP) and healthcare professionals (HCPs) have raised crucial questions: Can AI be trusted? How should results be communicated? And how can new technologies support care without replacing the human connection?
As part of the AI-PROGNOSIS project, PwP took part in interviews and focus groups, while HCPs joined co-creation workshops. Altogether, 269 people with and without PD from 16 countries, and 84 HCPs from 9 countries, contributed to these activities, which included two rounds of focus groups and surveys, three rounds of workshops, and three prototyping sprints.
Patient perspectives on AI
PwP described both opportunities and concerns around AI-supported care:
Autonomy: AI could strengthen independence by offering personalised insights, but it might also reduce autonomy if access to information were controlled by clinicians.
Beneficence: Participants saw benefits in AI’s ability to reveal patterns across complex data sets that often remain hidden in short visits.
Non-maleficence: Concerns were raised about psychological harm if predictive tools delivered risk scores without clear follow-up. Many emphasised that risk information should always be communicated by a healthcare professional, not an app.
Justice: Access to AI-driven care may be unequal, due to differences in resources, disease stage, or digital literacy.
Trust and privacy: PwP wanted transparency and control over data, particularly genetic and health information, with strict guarantees of confidentiality.
Healthcare professionals’ perspectives
Through AI-PROGNOSIS activities, HCPs also shared their views, identifying key conditions for responsible AI use:
Data security: Strict safeguards are essential to prevent misuse of personal data.
Bias: Algorithms must be trained on representative datasets to avoid reinforcing inequalities.
Human oversight: AI should be a support tool, not a replacement. Responsibility for decision-making must remain with clinicians.
Communication: Any AI-generated insights must be explained clearly, without creating unnecessary anxiety for patients.
Themes from co-creation workshops
When PwP and HCPs came together in AI-PROGNOSIS co-creation workshops, five recurring themes emerged:
Trust and security – concerns about data misuse and reliability.
Transparency and education – the need for openness about how AI works.
Bias – awareness that incomplete datasets can create unfair results.
Human oversight – agreement that AI should complement, not replace, care.
Psychological impact – recognition that poorly explained results could cause distress.
Both PwP and HCPs agree that AI has strong potential to improve PD care, from early detection to medication management. But trust depends on transparency, clear communication, and preserving the patient–doctor relationship. By organising interviews, focus groups, and workshops, AI-PROGNOSIS ensures that these voices directly inform tool design, embedding ethical considerations and patient needs into every stage of development.