
AI in Aged Care: Top Tips to Balance Innovation with Privacy and Ethics

11 Jun 2025

Alerts
Technology

Artificial intelligence (AI) is revolutionising aged care, offering innovative solutions, streamlining operations, and enhancing health outcomes. However, its adoption is not without challenges.

Research from around the world reports ever-decreasing levels of consumer trust in AI, even as consumers indicate overall trust in technology more broadly.

In any industry, it is crucial to build and maintain consumer trust in relation to AI use, privacy compliance, and ethical practices. For aged care providers, however, this challenge is compounded by the new Aged Care Quality Standards (Standards).

Providers must balance the potential benefits of AI against regulatory compliance with the Standards and privacy law[1].

Growing Role of AI

AI already offers aged care businesses many innovative benefits: 

  • Predictive health analytics: analyses medical data to predict health risks such as cognitive decline.
  • Smart monitoring and wearables: used to detect falls, track movement, and alert caregivers.
  • Chatbots and virtual assistants: help residents with medication reminders, appointment scheduling, and queries.
  • Automated documentation: reduces administrative burden, allowing staff to focus on care. 

Privacy and Ethical Challenges

When used responsibly, AI can enhance both care provision and business efficiency. However, given the human-centred nature of the aged care industry, it is essential that providers take steps to mitigate the associated risks, for example:

  • Surveillance vs. dignity: AI monitoring must balance safety with independence. Standard 1: The Person emphasises choice, dignity, and respect.
  • Privacy and security: Standard 7: Organisational Governance requires providers' governing bodies to ensure risk management systems are in place to protect personal information (allowing misuse may also amount to a breach of the Privacy Act).
  • Informed consent: Standard 1: The Person requires providers to ensure care recipients or their representatives have given informed consent before their information is used with AI.
  • AI bias and accuracy: Standard 8: Clinical Care emphasises safe and effective care. AI tools used in clinical settings must be evidence-based, appropriate, and reliably accurate.

Best Practices for AI Adoption

To harness AI responsibly and compliantly, providers should consider these ‘best practice’ tips:

Implement transparent AI policies

Transparency is crucial to building trust. Providers should develop clear policies outlining how AI systems function, what data is collected, and how the data is used. 

Recommendation: Ensure policies are easy to read, offer accessibility options, and encourage recipient engagement.

Obtain informed consent 

Generally, personal information should only be used or disclosed for the purpose it was collected. 

Recommendation: Take time to explain anticipated purposes for AI systems and obtain informed consent before deployment, ensuring the benefits and risks are communicated and understood. 

Enhance security 

Robust, multi-factor security measures are essential to prevent unauthorised access and data breaches. 

Recommendation: Avoid entering personal information into publicly available generative AI tools, and implement a data breach response plan that addresses AI use and misuse.

Maintain human oversight

AI should be used to assist (not replace) workers. Human oversight remains essential for accurate and fair decision-making, particularly relating to health and well-being. 

Recent amendments to the Privacy Act require transparency and accountability measures for the use of automated decision-making involving personal information.[2]

Recommendation: Implement regular audits and human-in-the-loop decision-making frameworks to ensure automated decisions align with ethical and legal standards.

Ensure ethical AI use

Continuous evaluation of AI tools is required to ensure fairness, prevent discrimination, and protect human dignity.

Recommendation: AI tools should be trained on diverse and representative datasets to reduce bias. Ongoing monitoring should be conducted to identify and rectify any unintended discriminatory outcomes. 

This article was written by Elizabeth Tylich and Ariel Bastian, Corporate Commercial.

----

[1] Aged care providers in Australia are likely to be governed by the Privacy Act because of the type of work they do and the sensitive health information they handle – even if they don’t meet the annual turnover threshold of > $3 million.

[2] While this amendment was passed with the reforms to the Privacy Act it has been scheduled to come into force in December 2026.
