Anti-bias resources
F’xa pcpro.link/299fxa
This feminist chatbot immediately expresses a bias of its own by excluding desktop users – it only works on smartphones. The forced interactive design somewhat conceals the overall message too, and it certainly won’t help with the actual coding. Still, it presents some thought-provoking ideas about equality online.
IBM pcpro.link/299ibm
A much more committed and focused statement than the video-based page mentioned above. Broadly speaking, it’s an advert for IBM’s own AI projects, but it sets out some useful concepts, along with mathematical examples, links and references that can help you to smarten up your own AI projects, no matter how humble.
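To give a flavour of the kind of mathematical bias check such resources describe, here's a minimal sketch of one common measure, the disparate impact ratio (the rate of favourable outcomes for one group divided by the rate for another). The function name and the toy data are invented for illustration; nothing here is taken from IBM's materials.

```python
# Illustrative sketch of the disparate impact ratio - a simple bias check
# of the sort anti-bias resources discuss. All names and data are invented.

def disparate_impact(outcomes, groups, protected, reference):
    """Ratio of favourable-outcome rates between two groups.

    outcomes: list of 0/1 decisions (1 = favourable)
    groups:   list of group labels, aligned with outcomes
    """
    def rate(label):
        selected = [o for o, g in zip(outcomes, groups) if g == label]
        return sum(selected) / len(selected)
    return rate(protected) / rate(reference)

# Toy example: group "b" receives favourable outcomes at a third of
# group "a"'s rate, so the ratio comes out at roughly 0.33.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["a", "a", "a", "a", "b", "b", "b", "b"]
ratio = disparate_impact(outcomes, groups, protected="b", reference="a")
print(round(ratio, 2))  # prints 0.33
```

A widely quoted rule of thumb treats a ratio below 0.8 as a warning sign worth investigating, though no single number settles whether a system is fair.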
Open Global Rights pcpro.link/299ogr
This online forum and think tank makes an insightful attempt to distinguish “unfair” bias from other types. Not far beneath the surface are contributions from big brands active in AI, but there’s nothing wrong with that – after all, public rights bodies themselves can’t be expert AI developers.
Salesforce Team Einstein pcpro.link/299sales
Salesforce is a company that’s battling with real data, real customers and real problems right now – so its experiences with artificial intelligence and the lessons it has learnt are pretty much priceless. Your own projects probably won’t be as ambitious, but the abilities, aims and approaches of Salesforce’s AI are a tremendously helpful yardstick to measure them against.
Accenture pcpro.link/299acc
Global consultancy firm Accenture knows about the potential problems AI can throw up – and it keeps a human rights lawyer on the team to help identify and address them. As usual, you’ll find very few lines of actual code here, but if you’re worried about whether you might get dragged through the courts over an AI error, it’s a good place to start.