
Reflective Piece: Ethics in Computing in the age of Generative AI

Unit 1

Introduction

The rapid rise of generative artificial intelligence (AI) since late 2022 has prompted widespread excitement and concern across the world. While AI promises significant benefits, its governance remains a critical challenge due to its complexity and broad societal impact. Corrêa et al. (2023) point out that at least 17 AI ethical principles exist internationally, but agreement on how to apply them is still rare. Meanwhile, Deckard (2023) emphasizes the difficulty of applying traditional governance models to AI, given its unique horizontal and virtual nature. Reflecting on these insights, I propose a hybrid governance approach: establishing universal ethical standards complemented by domain-specific regulations tailored to the distinct contexts and risks of each domain in which AI is used. This balanced model aims to maximize innovation and safety while fostering global trust and accountability.

The Need for General, Global Ethical Principles

The aforementioned rapid growth of generative AI highlights the urgent need for universal ethical principles to guide its development and use globally. AI presents complex risks such as privacy breaches, algorithmic bias, and misinformation, which often extend beyond specific domains or borders. The European Union (2024) acknowledges that while many AI applications pose limited risk, existing laws are not strong enough to address more harmful outcomes. Establishing broad principles such as human benefit, transparency, and responsibility (IEEE Standards Association, no date) provides a vital baseline to mitigate widespread harm and protect human rights.

These principles are also essential to building public trust in AI technologies. The European Commission (2019) defines trustworthy AI as lawful, ethical, and robust, requiring elements such as human oversight, privacy, fairness, and accountability. Without clear ethical frameworks, AI’s notoriously opaque decision-making processes risk eroding user confidence. International initiatives such as the World Economic Forum’s AI Governance Alliance (2024) and the United Nations (2023) emphasize that shared global commitments to these values are critical for responsible AI advancement and broad societal benefit.

Furthermore, universal ethical guidelines offer coherence amid emerging national and sector-specific regulations. While the lack of a common definition of AI complicates multinational governance, alignment with OECD principles (2024) promotes trustworthy AI based on inclusiveness, transparency, and robustness. Such frameworks help reduce regulatory fragmentation and foster harmonization across jurisdictions. By encouraging early risk management, these global principles support innovation alongside safety.

The Case for Domain-Specific AI Regulation

A domain-specific approach to AI regulation offers focused benefits for development and security, balancing innovative thinking with risk management. The European Union’s AI Act (2024) employs a risk-based framework, placing stricter requirements on high-risk AI systems in critical sectors like infrastructure and employment, while allowing more flexibility for minimal-risk applications. This approach fosters regulatory clarity and predictability, promoting responsible innovation and investment.

In development scenarios, tailored regulations recognize sector-specific challenges. In finance, for instance, traditional frameworks already address many AI concerns, but more targeted guidance is needed for AI systems used in core business activities or customer-facing processes. Initiatives such as the Hong Kong Monetary Authority’s GenAI Sandbox enable experimentation in a safe environment, aligning innovation with risk management (Crisanto et al., 2024). Such models reflect real industry conditions and help ensure that regulations remain proportionate and effective.

From a security perspective, domain-specific rules enable improved transparency and human oversight in high-risk applications. In financial services, tasks like credit scoring or insurance underwriting, classified as high-risk under the AI Act, require explicit standards for governance (European Union, 2024). These safeguards are crucial for ensuring AI reliability and protecting customer rights. Additionally, domain-focused policies directly address issues with third-party providers and privacy, bolstering operational resilience (European Commission, 2019).

Similarly, the World Health Organization (2021) emphasizes the need for ethical oversight in health-related AI. Their guidance places human rights at the centre of AI in healthcare, ensuring systems are designed and deployed with safety, accountability, and impartiality in mind. Such examples reinforce the value of domain-specific governance to align AI’s potential with sector-specific risks.

Towards Balanced and Responsible AI Governance

Having considered these perspectives on AI governance, it is evident to me that domain-specific and global initiatives are progressing concurrently. This dual approach is not only pragmatic but essential, and it aligns with my own position. By establishing general ethical principles that protect human rights while allowing tailored, sector-specific regulations, we are moving toward a governance model that supports both safety and continuous innovation.

Personally, I deeply value fostering environments where AI developers can explore and innovate. Technological progress often depends on having the space to experiment, especially within the unique needs and challenges of each domain. However, I also recognise the fragility of the boundary between innovation and ethical compromise. The fast-paced, competitive nature of AI development, particularly among private companies racing for leadership in model capabilities, can create incentives that risk crossing ethical lines. While some may be committed to principles of safety and fairness, we cannot depend on goodwill alone to safeguard the public interest.

Therefore, I believe that the most urgent need lies in establishing universal, globally recognised rules that protect individuals and their fundamental rights. A baseline of ethical AI use must be non-negotiable and legally enforceable, ensuring that innovation never comes at the expense of human dignity, safety, or freedom. Once this foundation is in place, more targeted regulations can be applied to reflect the unique characteristics of each domain.

In short, a hybrid approach, grounded in global ethical commitments and reinforced by domain-specific guardrails, offers the most balanced path forward. It recognises the complex realities of AI development while placing the protection of people at the centre of technological progress.

References

Corrêa, N.K., Galvão, C., Santos, J.W., Del Pino, C., Pinto, E.P., Barbosa, C., Massmann, D., Mambrini, R., Galvão, L., Terem, E. and de Oliveira, N. (2023) ‘Worldwide AI ethics: a review of 200 guidelines and recommendations for AI governance’, Patterns, 4(10), p. 100857. Available at: https://doi.org/10.1016/j.patter.2023.100857

Crisanto, J.C., Leuterio, C.B., Prenio, J. and Yong, J. (2024) Regulating AI in the financial sector: recent developments and main challenges. FSI Insights on policy implementation, no. 63. Basel: Bank for International Settlements. Available at: https://www.bis.org/fsi/publ/insights63.htm

Deckard, R. (2023) ‘What are ethics in AI?’, BCS - The Chartered Institute for IT. Available at: https://www.bcs.org/articles-opinion-and-research/what-are-ethics-in-ai/

European Commission (2019) Ethics guidelines for trustworthy AI. Available at: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

European Union (2024) Artificial Intelligence Act (Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence). Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689

IEEE Standards Association (no date) General principles. Available at: https://standards.ieee.org/initiatives/artificial-intelligence-systems/

United Nations System Chief Executives Board for Coordination (2023) United Nations system white paper on AI governance. Available at: https://unsceb.org/united-nations-system-white-paper-ai-governance