[I originally started writing this post on 19th July, and didn’t complete it before I started on the altMBA. I’m still posting belatedly because it’s important that we as software professionals think about these issues. The advantage of completing this now is that the talks have now started appearing on InfoQ.]
To recap, Ann posed these questions to those of us who write software for a living:
- Is there a problem with ethics?
- What can I as a software engineer do about it?
- What might stop me from doing that?
- What might get in my way?
- And how might I get over that?
I wrote about Cori Crider’s talk previously [InfoQ].
Caitlin McDonald talked about being a Data Citizen and the Data Ethics Canvas that can be injected into the work that we do. Government Digital Services has also released a Data Science Ethical Framework.
Caitlin referenced Cathy O’Neil’s book Weapons of Math Destruction, which examines the impact of opaque algorithms on modern life.
Caitlin draws parallels between how citizens interact with the law, and how data citizens interact with data science. How can we as data citizens push for more effective and fair data science, which impacts us? A first step might be insisting on the same level of transparency for ethical data practices in data science as there are in law.
You have to treat data like people, because most metadata is about people.
Caitlin’s talk is available on InfoQ.
Data Science in Action: Emma Prest and Clare Kiching talked about DataKind UK’s work. DataKind is a charity that does pro bono data science for good.
What can happen when you start off with the best of intentions?
An app to track potholes can lead to potholes being fixed in richer areas only because residents have smartphones. The Samaritans’ Radar app was launched with the intention of helping users who were posting tweets that suggested suicidal thoughts; it was suspended following privacy concerns.
When starting a project:
- Embed ethics discussions in your project process: at kick-off, when discussing data and results, and when planning implementation.
- Involve a diverse range of people — client, data scientists, developers, users — to get a fuller picture of the context and risks. Data scientists are never experts in the subject matter. And the users … bring them in earlier.
- Think about explainability. Will your clients be able to understand the model and explain decisions to end users?
Project intent matters. But also consider the unexpected consequences.
[InfoQ link] for Emma and Clare’s talk.
Andrea Dobson’s topic was Ethics: A psychological perspective.
Why do good people make bad decisions?
If you have to report an ethical issue would you prefer to conform to the rest of the group?
In less clear situations, levels of conformity increase.
In 1997 a Korean Air flight crashed, killing 223 people because the co-pilot was afraid to question the poor judgement of the pilot.
Andrea talked about Milgram’s experiment and the notion of psychological safety, which allows people to feel able to speak up.
Here’s Amy Edmondson on Building a psychologically safe workplace.
Many of us create technology with positive impact.
What makes the world better but doesn’t feel morally good?
Human intuitions don’t scale … we can do a huge amount of good, but because of scope insensitivity bias we don’t feel good enough about doing it.
How can we use limited resources to help others most effectively? How can charities use their resources in the most effective way?
The UK’s National Institute of Clinical Excellence uses a unit of measurement called the QALY - the “quality-adjusted life year”. The NHS is prepared to spend £20,000 per QALY.
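To make the QALY concrete, here is a minimal sketch of the arithmetic. The intervention and figures are hypothetical, invented for illustration — only the £20,000 threshold comes from the talk.

```python
# Toy cost-effectiveness comparison using QALYs.
# The treatment figures below are illustrative, not real NICE data.

QALY_THRESHOLD_GBP = 20_000  # willingness-to-pay per QALY, per the talk

def cost_per_qaly(cost_gbp, years_gained, quality_weight):
    """Cost per quality-adjusted life year; quality_weight is 0..1."""
    return cost_gbp / (years_gained * quality_weight)

# Hypothetical treatment: £50,000 buys 4 extra years at 0.8 quality of life.
ratio = cost_per_qaly(50_000, 4, 0.8)
print(f"£{ratio:,.0f} per QALY")  # £15,625 per QALY
print("fund" if ratio <= QALY_THRESHOLD_GBP else "don't fund")  # fund
```

The same yardstick lets very different interventions — drugs, surgery, mosquito nets — be compared on one scale.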
Malaria is no longer the leading killer of people in Africa due to mosquito nets.
The most effective actions are not always the most intuitive.
No Lean Season provides low-interest loans to rural workers in Bangladesh so they can buy a bus ticket, travel to Dhaka and work there as rickshaw drivers.
Prioritise good actions at scale.
- think once, then think twice (on whether your biases are affecting the situation) and think hard
- focus on what you (personally) can do
- have courage
- remember that you have the power
Harry Trimble’s topic was Power, ethics and design education.
It’s getting harder to tell design and software development apart.
Richard Pope: Software is politics now. Because software is power.
Today so much power in society lies with designing and building software.
Read Stephen McCarthy’s post Designers as power brokers.
Design is a position of power. Designers need to recognise the power relationships in the things they build.
Read George Aye’s post Design Education’s Big Gap: Understanding the Role of Power.
Open APIs in the telecoms sector could create powerful new services.
The Data Permissions Catalogue has a number of different patterns for the sharing of data.
A Responsible Development Process
Developers and product owners have the power. We can create things out of nothing.
But how do you know what to do?
Responsible Technology considers the social impact it creates and seeks to understand and minimise its potential unintended consequences.
- Do not knowingly create or deepen existing inequalities
- Recognise and respect dignity and rights
- Give people confidence and trust in their use
This is not an ethical bible, but guidelines for responsible use.
Steve Worswick talked about his chatbot, Mitsuku. Traffic ranges from 20 visits per day to 2,000 per hour.
He started entering chatbot competitions in 2010, and in his first year placed higher than Siri (which came 14th).
The only places that don’t yet interact with Mitsuku are the South Pole and North Korea.
Because it talks to so many people he has to keep it child friendly. It’s designed for general chit-chat.
Supervised learning is very time-consuming. But unsupervised learning can produce poor results.
Mitsuku does have attackers. They try to corrupt it, but because of the supervised learning method they haven’t succeeded so far.
A weak response from the chatbot encourages a bullying-victim dynamic, so he finds a middle ground with a warning system: you get five strikes, and then your IP address is banned for 24 hours.
If you’re mean to the bot, the bot will be mean to you. This is a fairly human trait.
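The strike-and-ban scheme could be sketched roughly as follows. This is an assumption-laden illustration of the described behaviour — the names, data structures and timing details are mine, not Mitsuku’s actual implementation; only the five-strike and 24-hour figures come from the talk.

```python
import time

# Hypothetical sketch: five strikes for abusive messages, then the
# offending IP address is banned for 24 hours.
MAX_STRIKES = 5
BAN_SECONDS = 24 * 60 * 60

strikes = {}       # ip -> current strike count
banned_until = {}  # ip -> unix time when the ban lifts

def is_banned(ip, now=None):
    now = time.time() if now is None else now
    if ip in banned_until and now < banned_until[ip]:
        return True
    banned_until.pop(ip, None)  # ban expired (or never existed); forget it
    return False

def record_abuse(ip, now=None):
    """Register one abusive message; returns True if this triggers a ban."""
    now = time.time() if now is None else now
    strikes[ip] = strikes.get(ip, 0) + 1
    if strikes[ip] >= MAX_STRIKES:
        banned_until[ip] = now + BAN_SECONDS
        strikes[ip] = 0  # reset so the user starts fresh after the ban
        return True
    return False
```

Banning by IP address rather than account keeps the scheme anonymous-user-friendly, at the cost of occasionally catching shared addresses.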
If Mitsuku learns something, she learns it only for the user who’s chatting.
Steve compared pattern-matching bots, which are expensive to maintain but effective and predictable, with NLP bots, which tend to be fairly unpredictable. Mitsuku uses a language called AIML, which is derided by others, but he shows them his medals.
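To see why the pattern-matching approach is predictable, here is a toy illustration in Python — not AIML syntax, and nothing like Mitsuku’s real rule base. Every rule and reply below is invented; the point is only that each input maps to a hand-written pattern with a known response.

```python
import re

# Toy pattern-matching bot: a fixed list of (pattern, response) rules,
# checked in order. Predictable, but every rule must be written by hand.
RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hi there! How are you today?"),
    (re.compile(r"my name is (\w+)", re.I), "Nice to meet you, {0}."),
    (re.compile(r"\bpizza\b", re.I), "I can't eat pizza, but it sounds tasty."),
]

def reply(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # predictable fallback when nothing matches

print(reply("My name is Ann"))  # Nice to meet you, Ann.
```

The maintenance cost is obvious — every variation needs a rule — which is why a topic covered by 10,000 phrasings is already a lot of work, and a general chit-chat bot needs hundreds of thousands of intents.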
We shouldn’t personify robots … they’re machines.
Topic-specific bots tend to be more focused - there are roughly 10,000 ways to ask for a pizza. Mitsuku has 350,000 intents.
In summing up, Ann reflected on how we can build psychological safety into our workplaces.
Facebook would like us to believe that its decisions are deliberate, but they probably aren’t.
What do you measure? How can you allow for unpredictable outcomes? And be deliberate about the ethical choices you make.
There are three aspects to policing ethics in tech:
- The law
- Educating users
- Tech companies - getting Google/Apple to buy in
Additionally, power comes from us techies.
In June 2018, Baroness Buscombe stated that the behaviour of AI and ML systems will be covered by existing Health and Safety legislation.
For further reading, look at the Resources and Press sections on the Coed:Ethics home page.