AI for social responsibility: embedding principled guidelines into AI systems

In this position talk we briefly retrace the historical and evolutionary context that led to AI research results not necessarily being used first and foremost to benefit the public that funded them, nor necessarily focusing on human values and concerns.

Next, we discuss how the AI language Constraint Handling Rules (CHR) can promote social responsibility by making it easy to embed principled guidelines into our systems, and we illustrate this idea with an application to enhance voting and decision-making power.
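To make the idea of guidelines-as-rules concrete, the sketch below approximates, in plain Python, what a CHR simplification rule does: it matches constraints in a store and removes those that violate a stated guideline. The guideline, the rule name, and the vote representation here are illustrative assumptions, not the talk's actual application; real CHR programs run as rule libraries inside Prolog and similar hosts.

```python
# Illustrative sketch (not real CHR): a guideline encoded as a
# simplification rule over a constraint store of vote/2 constraints.
# Hypothetical guideline: "one person, one vote" -- later duplicate
# votes by the same voter are simplified away (removed from the store).

def one_person_one_vote(store):
    """Apply the rule to a store of (voter, choice) constraints,
    keeping only the first vote recorded per voter."""
    seen = set()
    kept = []
    for voter, choice in store:
        if voter not in seen:       # rule head matches a fresh voter: keep
            seen.add(voter)
            kept.append((voter, choice))
        # otherwise the duplicate constraint is simplified away
    return kept

votes = [("ann", "A"), ("bob", "B"), ("ann", "C")]
print(one_person_one_vote(votes))  # [('ann', 'A'), ('bob', 'B')]
```

In actual CHR the same guideline would be a one-line rule over the constraint store, which is what makes such principled constraints cheap to state, audit, and revise.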

Finally, we examine the very notion of intelligence in the light of the more recent notion of group intelligence, and draw conclusions about what might be needed to ensure that AI capabilities are put to socially responsible uses only. In particular, we identify what legislation might help place AI at the service of the urgently needed solutions to today's various crises, with the overall aim, as K. Raworth put it, to "meet the needs of all within the means of the planet".