Harnessing AI for Good: A Human-Centred Approach

28 June 2024

In a world increasingly driven by technology, Artificial Intelligence (AI) is emerging as both a beacon of promise and a potential threat. As we seek to integrate AI into the fabric of our daily lives, the challenge is not just to utilise this technology, but to do so in a way that enhances humanity rather than undermines it. If a human-centred approach is what is needed for AI to be a force for good, what roles can psychology play in assuring this and avoiding the pitfalls of misuse?

The first step in unpicking AI's role in society is to consider what we mean by “intelligence”. Is it the mere clinical ability to answer questions (which is effectively what AI does), or does true intelligence lie in providing insightful, meaningful responses to those questions (which AI does not)? Edsger Dijkstra, a luminary in computer science, offers a helpful perspective: "...the question of whether machines can think is about as relevant as the question of whether submarines can swim. Machines don't think." Dijkstra's analogy emphasises that whilst AI may appear to act like humans do, it cannot think like humans do. It simply processes the data it can access and executes tasks based on programmed algorithms. AI facilitates high productivity, but productivity alone is not what makes us human.

Instead of a binary ‘Human versus Machine’ framing, human-centred AI may be a more helpful perspective. This requires a paradigm shift, placing human needs (and by this I mean the full spectrum of human needs) and values at the forefront of technological development. This approach envisions AI as an augmentative tool, one that amplifies human capabilities and helps us achieve our goals more effectively. This is the perspective of Tom Gruber, from the team who created Siri. He suggests that AI systems can be designed to enhance our health, to provide personalised feedback, and even to foster our wellbeing by encouraging pro-social behaviours (as opposed to anti-social ones). This vision of “Big Mother” instead of “Big Brother” positions AI as a benevolent partner in our pursuit of happiness, health, and intelligence.

This certainly sounds good, but this potential is fraught with ethical challenges. AI must be built on foundations of transparency, fairness, and ethical integrity. Without these, it risks becoming a tool for manipulation and exploitation. How society navigates this quagmire of ethical challenges and bias is where the science of psychology can make an important contribution.

One of the most pressing and obvious ethical challenges in AI development is the mitigation of bias. Interestingly, an emerging but not yet widely discussed area of AI development is ‘machine learning fairness’. Tiffany Dang, an expert in this field, explains that AI systems learn from the data they are given: if this data is biased, the AI will mirror and perpetuate those biases, with potentially serious consequences in areas like employment and financial services. Machine learning fairness is about ensuring AI systems are trained on diverse, representative datasets, thereby reducing the risk of biased outcomes. It follows that representation in data is crucial if AI is to learn from a broad spectrum of scenarios rather than being skewed towards a single or narrow set of perspectives.

Establishing Ethical Frameworks (and Global Standards)
Wide involvement of all stakeholders is key if an equitable framework is the end goal. This requires a collective effort from technologists, ethicists, policymakers, and all communities and constituencies (especially the most vulnerable) to establish effective ethical and equitable guidelines and best practices. Inclusiveness in these discussions is key to ensuring that AI development is informed by as wide a range of experiences and perspectives as possible.

When we consider AI, we need to acknowledge that the fallibility of humans is replicated in the fallibility of AI. This recognition should drive a commitment to continual learning and improvement, allowing us to adapt and refine our approaches as we better understand AI's societal impacts. Without this humility and self-awareness, and in the absence of an inclusive approach, a universal, global standard for AI fairness will be impossible.

Psychologists’ position as advocates for science, ethics and human rights, and as champions of humanity, lends itself to supporting this. Cultivating a growth mindset in the dynamic field of AI, and all its peripheral fields, is essential. This involves embracing the lessons learned from both successes and failures, and fostering a culture of continuous learning and adaptation. By maintaining this mindset, we can better navigate the complexities and challenges inherent in AI development.

Harnessing AI for good requires a human-centred approach grounded in ethical principles, transparency, and inclusiveness. By prioritising these values, we can develop AI systems that truly enhance our lives, contributing to a healthier, happier, and smarter society. As we move forward, it is vital to remain vigilant and proactive in addressing the ethical and social implications of AI, ensuring it serves as a force for positive change. Only by doing so can we realise the full potential of AI while safeguarding against its misuse. Without psychology having a seat at the table and a voice in the room, these priorities risk being lost or diluted.

Blog post by Sheena Horgan

Chief Executive Officer

The Psychological Society of Ireland