Conversational artificial intelligence (AI)

Page last updated 8 July 2025


This resource aims to help GPs weigh up the potential advantages and disadvantages of using conversational AI in their practice.

‘Conversational artificial intelligence (AI)’ refers to technologies that can engage in natural, human-like conversations. Conversational AI encompasses tools such as advanced chatbots, virtual agents/assistants, and ‘embodied conversational agents’ (avatars). Prominent examples include OpenAI’s ChatGPT, Google Gemini, Microsoft Copilot, and Anthropic’s Claude.

Many conversational AI tools use generative AI techniques (creating new content) to engage in these conversations and can be considered generative AI as well as conversational AI. The focus of this fact sheet is on the conversational AI properties of these technologies. Conversational AI is distinct from AI scribes, which convert a conversation with a patient into a clinical note, summary, or letter that can be incorporated into the patient’s health record. The RACGP has a separate resource on AI scribes.

Conversational AI tools are trained on vast quantities of data from the internet, including articles and books. Unlike ‘simple’ chatbots that rely on predefined rules and scripts to respond to the user, ‘advanced’ conversational AI chatbots use large volumes of data together with AI technologies such as machine learning, natural language processing, and automatic speech recognition. The current generation of conversational AI tools incorporates generative AI, which is inherently probabilistic (subject to chance or variation) and can behave unpredictably.1 These innovations mean that the tool can discern the intent of the user’s inputs and ‘learn’ from users’ behaviour over time.

There is no doubt that conversational AI could revolutionise parts of healthcare delivery. However, GPs should be extremely careful in using conversational AI in their practice at this time. Many questions remain about patient safety, patient privacy, data security, and impacts on clinical outcomes.

 

Research to support the use of conversational AI is trailing behind its actual use, but possible applications are listed below:

Clinical applications

  • Answering patient questions regarding their diagnosis or the potential side effects of prescribed medicines, and simplifying jargon in medical reports.
  • Providing treatment/medication reminders and dosage instructions.
  • Providing language translation services.2
  • Guiding patients to appropriate resources.3
  • Supporting patients to track and monitor blood pressure, blood sugar, or other health markers.4
  • Triaging patients prior to a consultation.5
  • Preparing medical documentation (eg, clinical letters, clinical notes and discharge summaries).6
  • Providing clinical decision support by preparing lists of differential diagnoses,7 supporting diagnosis,8 and optimising clinical decision support (CDS) tools (for investigation and treatment options).9
  • Suggesting treatment options and lifestyle recommendations.4

Business applications

  • Automating billing and analysing billing data for business purposes.10
  • Providing a platform for scheduling appointments and sending reminders to patients.11
  • Assisting in the preparation of promotional materials for general practice business owners.12
  • Collecting and collating patient information.13

Educational applications

  • Summarising the medical literature and answering clinicians’ medical questions.14,15
  • Modelling sensitive and empathetic communication and providing guidance on ways to deliver bad news to a patient.16,17
  • Personalising educational activities for medical students and practitioners.18

Research applications

  • Simplifying elements of scientific writing, such as by generating literature reviews.19
  • Efficiently analysing large datasets.19
  • Enhancing drug discovery and development.20
 

The ethical and privacy issues inherent in using sensitive health data, together with the regulation of medical products, have meant that the use of AI is not as widespread in medicine as it is in other domains. There are still many clinical, privacy, and workflow issues to be resolved before conversational AI can be used safely and to its fullest potential in clinical settings.

Clinical issues

  • Conversational AI tools can provide responses that appear authoritative but on review are vague, misleading, or even incorrect. Training data might be out of date, and AI can produce ‘hallucinations’,21 in which the output appears credible or plausible but is in fact incorrect or unverifiable. In a healthcare setting, these problems of inaccuracy create an obvious risk of harm. Where these tools are used in a pre/post-clinical or consumer-facing context, there is also a risk that patients will change their behaviour as a result of a conversational AI tool’s incorrect advice, or disregard the opinion of a medical practitioner in favour of that of an AI tool.22
  • Biases are inherent to the data on which AI tools are trained, and as such, particular patient groups are likely to be underrepresented in the data. There is a risk that conversational AI will make unsuitable and even discriminatory recommendations, rely on harmful and inaccurate stereotypes, and/or exclude or stigmatise already marginalised and vulnerable individuals.23
  • There is a risk that conversational AI could spread harmful misinformation to healthcare consumers, with consequences for safety.13
  • The techniques used by AI to make recommendations are opaque, making it hard for users to trace the sources of the tool’s output and independently evaluate the evidence it uses to reach its conclusions.24
  • Conversational AI performs best on straightforward clinical tasks.8 AI tools might present medical advice without caveats about areas where the evidence is unclear or subject to professional debate.22
  • Some conversational AI tools are designed for medical use (such as Google’s MedPaLM and Microsoft’s BioGPT), but most are designed for general applications and are not trained to produce results within a clinical context. The data these general tools are trained on are not necessarily up to date or from high-quality sources, such as medical research.22 Quality of output is likely to improve if the AI is trained on a large dataset of clean data from medical/pharmacological academic literature and patients’ medical records.11,15
  • If used to assist with diagnoses or treatment suggestions, these tools need to be approved by the TGA under ‘software as a medical device’ regulations.25

Privacy, security, and legal issues

  • The sensitive nature of health data creates a particular challenge for the developers of AI tools and the clinicians who use them. Cybersecurity incidents could lead to breaches in patient confidentiality, with implications under Australian law. GPs should never enter sensitive or identifying data into a conversational AI tool.3
  • Many jurisdictions are grappling with the challenges of regulating AI tools. In Australia, legislators are still working out how best to mitigate potential harms.

Workflow and practice issues

  • While using conversational AI might save time, the need to comprehensively check its outputs might partially negate this benefit for GPs.
  • When conversational AI is used to create text for business applications (eg, information for patients on a general practice’s website), questions arise about the legal authorship of the work, with implications under copyright law.12
  • Many conversational AI tools are not (yet) compatible with or integrated into clinical information systems, meaning they cannot easily be assimilated into existing workflows.
  • There may be rules prohibiting the use of AI-generated reports in legal proceedings.26

Education issues

  • In medical education, there are obvious threats to fair assessment in the form of cheating and plagiarism.27 Some conversational AI tools might be able to circumvent plagiarism detection software.28
 

Ahpra has neatly summarised medical practitioners’ responsibilities when using AI in its article ‘Meeting your professional obligations when using Artificial Intelligence in healthcare’.29 In brief:

  • GPs are ultimately responsible for the delivery of safe and quality care. GPs should always check AI outputs and the terms of service of the tools (ie, management of information and indemnity clauses) before using them in practice.
  • GPs should only use conversational AI as a resource to supplement information from other sources. It should never be used as the sole source for clinical decision-making.
  • GPs should involve patients in the decision to use AI tools that require input of their personal information and obtain informed patient consent when using patient-facing AI tools; for example, if using conversational AI as part of an intake procedure.
  • Before bringing conversational AI into their practice workflows, GPs should learn how to use it safely, including the risks and limitations of the tool and how and where data is stored. If the tool is to be used, GPs should be transparent about its use in the practice privacy policy.
  • GPs must ensure that the use of the conversational AI tool complies with relevant legislation and regulations, as well as any practice policies and professional indemnity insurance requirements that might impact, prohibit or govern its use.

It is also worth considering that conversational AI tools designed specifically by, and for use by, medical practitioners are likely to provide more accurate and reliable information than general, open-use tools.4 These tools should be TGA-registered as medical devices if they make diagnostic or treatment recommendations.25

 

This document does not constitute legal advice. When considering if, and how to use conversational AI tools, practices must seek independent legal advice. The RACGP takes no responsibility for any loss of any description by a practice or person as a result of relying on this document.

 
  1. Coiera E, Fraile-Navarro D. AI as an Ecosystem — Ensuring Generative AI Is Safe and Effective. NEJM AI. 2024;1(9):AIp2400611.
  2. Souza LLd, Fonseca FP, Martins MD, et al. ChatGPT and medicine: a potential threat to science or a step towards the future? Journal of Medical Artificial Intelligence. 2023;6(19).
  3. Haltaufderheide J, Ranisch R. The ethics of ChatGPT in medicine and healthcare: a systematic review on Large Language Models (LLMs). npj Digital Medicine. 2024;7(1):183.
  4. Chow JCL, Wong V, Li K. Generative Pre-Trained Transformer-empowered healthcare conversations: current trends, challenges, and future directions in Large Language Model-enabled medical chatbots. BioMedInformatics. 2024;4(1):837-52.
  5. Hong G, Smith M, Lin S. The AI will see you now: feasibility and acceptability of a conversational AI medical interviewing system. JMIR Form Res. 2022;6(6):e37028.
  6. Ali SR, Dobbs TD, Hutchings HA, Whitaker IS. Using ChatGPT to write patient clinic letters. The Lancet Digital Health. 2023;5(4):e179-e81.
  7. Liu J, Wang C, Liu S. Utility of ChatGPT in clinical practice. J Med Internet Res. 2023;25:e48568.
  8. Rao A, Pang M, Kim J, et al. Assessing the utility of ChatGPT throughout the entire clinical workflow: development and usability study. J Med Internet Res. 2023;25:e48659.
  9. Liu S, Wright AP, Patterson BL, et al. Assessing the value of ChatGPT for clinical decision support optimization. medRxiv. 2023:2023.02.21.23286254.
  10. Zaidat B, Lahoti YS, Yu A, et al. Artificially intelligent billing in spine surgery: an analysis of a Large Language Model. Global Spine Journal. 2025;15(2):1113-20.
  11. Chow JCL, Sanders L, Li K. Impact of ChatGPT on medical chatbots as a disruptive technology. Front Artif Intell. 2023;6:1166014.
  12. Ryan B. ChatGPT for medical practice websites - what medical practices need to consider. Sydney: Avant; 2023. Available at  
  13. Harrer S. Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine. eBioMedicine. 2023;90.
  14. Sandmann S, Riepenhausen S, Plagwitz L, Varghese J. Systematic analysis of ChatGPT, Google search and Llama 2 for clinical decision support tasks. Nature Communications. 2024;15(1):2050.
  15. Johnson D, Goodman R, Patrinely J, et al. Assessing the accuracy and reliability of AI-generated medical responses: an evaluation of the Chat-GPT model. Res Sq. 2023.
  16. Ayers JW, Poliak A, Dredze M, et al. Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Intern Med. 2023;183(6):589-96.
  17. Webb JJ. Proof of concept: using ChatGPT to teach emergency physicians how to break bad news. Cureus. 2023;15(5).
  18. Han J-W, Park J, Lee H. Analysis of the effect of an artificial intelligence chatbot educational program on non-face-to-face classes: a quasi-experimental study. BMC Medical Education. 2022;22(1):830.
  19. Sallam M. ChatGPT utility in healthcare education, research, and practice: systematic review on the promising perspectives and valid concerns. Healthcare. 2023;11(6):887.
  20. Rehman AU, Li M, Wu B, et al. Role of artificial intelligence in revolutionizing drug discovery. Fundamental Research. 2024.
  21. Gravel J, D’Amours-Gravel M, Osmanlliu E. Learning to fake it: limited responses and fabricated references provided by ChatGPT for medical questions. Mayo Clinic Proceedings: Digital Health. 2023;1(3):226-34.
  22. Walker HL, Ghani S, Kuemmerli C, et al. Reliability of medical information provided by ChatGPT: assessment against clinical guidelines and patient information quality instrument. J Med Internet Res. 2023;25:e47479.
  23. Fournier-Tombs E, McHardy J. A medical ethics framework for conversational artificial intelligence. J Med Internet Res. 2023;25:e43068.
  24. Laranjo L, Dunn AG, Tong HL, et al. Conversational agents in healthcare: a systematic review. J Am Med Inform Assoc. 2018;25(9):1248-58.
  25. Therapeutic Goods Administration (TGA). Artificial Intelligence (AI) and medical device software. Canberra, ACT: TGA; 2024. Available at
  26. Avant. Writing an expert witness medico-legal report. 2025. Available at
  27. Grassini S. Shaping the future of education: exploring the potential and consequences of AI and ChatGPT in educational settings. Education Sciences. 2023;13(7):692.
  28. Else H. Abstracts written by ChatGPT fool scientists. Nature. 2023;613:423.
  29. Australian Health Practitioner Regulation Agency (Ahpra). Meeting your professional obligations when using Artificial Intelligence in healthcare. Melbourne: Ahpra; 2024. Available at
