When AI Meets Animal Welfare: Exploring the Moral Frontiers of Intelligence
At a groundbreaking event in Santiago de Compostela, researchers explored how AI could reduce animal suffering, transform ecosystems, and even challenge our understanding of moral consideration—raising profound questions about the future of sentient beings, human or artificial.


We're living at a time when AI isn't just reshaping human society; it's reshaping how we think about all sentient beings. Could AI be used to detect and reduce suffering in animals? Could it even become sentient itself one day?
At this event, researchers and thinkers from across organizations came together to dive into these questions. And when you bring so many inspiring minds into one room, something magical happens: ideas spark, perspectives collide, and new possibilities are born.
The debates were passionate. On one hand stood the risk of AI amplifying suffering on an unimaginable scale; on the other, the possibility of radically improving the lives of countless beings:

Wild animals: detecting extreme suffering through sensors, cameras, and AI; even delivering food or vaccines via drones.

Farm animals: improving welfare with sensors to track thermal comfort, aggression, stress, and natural behaviors.

Ecosystems: applying machine learning to monitor habitats and predict risks such as fires, droughts, or disease outbreaks.

Food innovation: accelerating R&D in plant-based proteins, improving taste, texture, and stability.

Animal testing alternatives: using AI to develop organ-on-a-chip models and advanced simulations.
And then came the most challenging debate: AI itself.

If advanced systems were ever to become sentient, even in the smallest way, would we owe them moral consideration?
Should we care about AI systems if they ever become capable of feeling, even if they never think or act like humans do?
And what would it mean if we got it wrong: if we denied moral status to a sentient AI, or granted it to one that wasn't?
This is just the beginning. The future of AI won't only be about productivity or efficiency. It will also be about ethics, empathy, and the profound question of who, or what, we choose to include in our moral circle. Who knows what might grow from these conversations?