Artificial intelligence (AI) is revolutionising aged care, offering innovative solutions, streamlining operations, and enhancing health outcomes. However, its adoption is not without challenges.
Research from around the world reports steadily declining levels of consumer trust in AI, even as consumers indicate an overall trust in technology.
In any industry, it is crucial to quantify consumer trust in relation to AI use, privacy compliance, and ethical practices. However, for aged care providers, this issue is further compounded by the new Aged Care Quality Standards (Standards).
Providers must balance the potential brought by AI against regulatory compliance with the Standards and privacy law[1].
AI already offers aged care businesses many innovative benefits.
When used responsibly, AI can enhance care provision as well as business efficiency. However, given the human-centred nature of the aged care industry, it is integral that providers take steps to mitigate the associated risks.
To harness AI responsibly and compliantly, providers should consider these ‘best practice’ tips:
Transparency is crucial to building trust. Providers should develop clear policies outlining how AI systems function, what data is collected, and how the data is used.
Recommendation: Ensure policies are easy to read, offer accessibility options, and encourage recipient engagement.
Generally, personal information should only be used or disclosed for the purpose it was collected.
Recommendation: Take time to explain anticipated purposes for AI systems and obtain informed consent before deployment, ensuring the benefits and risks are communicated and understood.
Robust, multi-factor security measures are essential to prevent unauthorised access and data breaches.
Recommendation: Avoid entering personal information into publicly available generative AI tools, and implement a data breach response plan that addresses AI use and misuse.
AI should be used to assist (not replace) workers. Human oversight remains essential for accurate and fair decision-making, particularly relating to health and well-being.
Recent amendments to the Privacy Act require transparency and accountability measures for the use of automated decision-making involving personal information.[2]
Recommendation: Implement regular audits and human-in-the-loop decision-making frameworks to ensure automated decisions align with ethical and legal standards.
Continuous evaluation of AI tools is required to ensure fairness, prevent discrimination, and uphold human dignity.
Recommendation: AI tools should be trained on diverse and representative datasets to reduce bias. Ongoing monitoring should be conducted to identify and rectify any unintended discriminatory outcomes.
This article was written by Elizabeth Tylich and Ariel Bastian, Corporate Commercial.
----
[1] Aged care providers in Australia are likely to be governed by the Privacy Act because of the type of work they do and the sensitive health information they handle, even if they do not meet the $3 million annual turnover threshold.
[2] While this amendment was passed with the reforms to the Privacy Act, it is scheduled to come into force in December 2026.