Appropriate Future, this collection of news and links, is undergoing a bit of retooling and refocusing: it will now target a single subset of our previous range of topics, namely AI, data and algorithms, and their impact on the planet, humans, privacy and public policy. If you know anyone who might be interested in the topic, please share!
People should not be slaves to machines: The EU’s Artificial Intelligence Act — the most significant international effort to regulate AI to date — has asked groups to weigh in on designating “high risk” AI, and the European Evangelical Alliance has some opinions. ‣ Khari Johnson in wired.com
The oxymoron of the day: Algorithmic Humanitarianism. Use of AI in adjudicating refugee procedures and immigration decisions is “an idea suffering from the mechanical, technocratic, and scientific acclimatization of human existence devoid of ethics, justice, and morality,” says Dr. Nafees Ahmad of South Asian University (SAARC)-New Delhi. “In human rights protection, refugee rights, and immigration decisions, AI has been adversely impacting Refugee Status Determination (RSD) procedures and immigration judgments across the world.” ‣ moderndiplomacy.eu
Democracy Is Losing Its Race With Disruption:
Many politicians appear uncertain whether to get cozy with the visionary leaders of Google, Apple, and Facebook—or to campaign against the pollution of the American information ecosystem, the amplification of hate speech and harassment, and the striking concentration of market power among a small number of companies. ‣ theatlantic.com
With the US withdrawal from Afghanistan, the Taliban got hold of the US military’s biometric records of Afghan citizens who had been helping US forces in the country and who now face the risk of being targeted. Can governments and militaries responsibly and securely handle biometric records? ‣ Usama Khilji in aljazeera.com
We are on the cusp of one of the most dangerous arms races in human history:
Where the ethical battle is hottest, however, is in relation to “human-out-of-the-loop” systems: completely autonomous devices operating on land, under the sea or in the air, programmed to seek, identify and attack targets without any human oversight after the initial programming and launch. The more general term used to describe these systems is “robotic weapons,” and for the attacking kind “lethal autonomous weapons” (LAWs). There is a widespread view that they could be in standard operational service before the mid-21st century. Hundreds of billions of dollars are being invested in their development by a mixture of the US, China, Russia and the UK.
‣ AC Grayling via prospectmagazine.co.uk
VentureBeat has interviews with the six winners of its Women in AI awards, including Soltani Panah, who is working on how AI can tackle complex social problems. ‣ venturebeat.com
Human rights climb the business school curriculum ‣ Andrew Jack in ft.com
Should government use the web to nudge our behaviour? ‣ Alex Hern in The Guardian