
“Just because you can, doesn’t mean you should” – Emerald de Leeuw.
Our sister site Otia.io, a digital magazine that celebrates passions and interests in the global tech community, recently recorded a thought-provoking podcast on all things Ethics in Tech.
We wanted to highlight and give a platform to some of the technologists thinking about the bigger picture and focusing on these major issues. The podcast is your guide to why you should care about internet cookies, and what exactly the “Campaign to Stop Killer Robots” is all about.
You can expect to hear a panel of experts discuss things like autonomous weapons, big tech companies, and surveillance capitalism. We also deep-dive into big data, micro-targeting online, privacy, trust, outdated business models, and of course, where we go from here.
If you’ve never heard of these things, this is a great place to start, as we can guarantee at least some of the issues apply to you in your everyday life. But if there’s just one thing you take from this discussion, it’s that nobody is immune to the impact technology has on our choices – ethical or otherwise.
This is a broad subject, but we had to start somewhere, and hearing from our panel on how tech ethics has come to play a huge role in their personal and professional choices is the eye-opener we all need.
Our podcast host Zuzia Whelan is joined by our panel: software engineer Laura Nolan; Amnesty Tech’s deputy director Rasha Abdul Rahim; Irish Times technology columnist Karlin Lillington; tech entrepreneur, journalist, and former Twitter Ireland MD Mark Little; and lawyer and data ethics specialist Emerald de Leeuw.
Killer Robots
In late 2017, Google software engineer Laura Nolan found out that her work was supporting a US military surveillance project. The project, called Maven, was intended to help analyse reams of drone surveillance footage to track people of interest, or select potential strike targets.
The idea was to free up the people who would have been analysing this footage, but who were perhaps starting to feel the effects of having life or death situations at their fingertips. The other, bigger issue was that there was simply too much footage to get through with humans alone.
Some people think this is ethically unproblematic – “it’s just analysing video”, Laura told us. But context is everything. This process is very much part of the military kill chain.
She started to lose sleep over the ethics of Maven, and was one of several Google employees who signed an open letter calling on the company to end its involvement in the project.
The following year, Laura left Google and began volunteering with the Campaign to Stop Killer Robots. Since then, she has also joined a group pushing for the decentralisation of Covid-19 contact tracing apps.
On the panel, Laura shared her thoughts on the ethical issues of allowing machines to decide who lives and dies – and why we’re often quick to assume that AI is always right.
Building trust
Rasha Abdul Rahim is deputy director at Amnesty Tech, at Amnesty International’s London Secretariat. She also heads up the AI and Big Data teams, and works on autonomous weapons.
In this episode, she lends her expertise to explain the intricacies of contact tracing apps. With our own government rolling out Ireland’s version recently, there’s never been a better time to learn about what happens when you give an app your personal health information.
In terms of human rights risks, contact tracing itself is an essential part of pandemic response. But what happens when you digitise that process? You risk invading people’s privacy.
Under human rights law, there needs to be a proportionate response to a crisis – we need to use the least invasive means necessary to achieve a public health objective.
One of the issues with these apps is that they needed to be rolled out so quickly that there was barely any time to regulate them or to determine best practice.
The bottom line is that there needs to be trust between the consumer and the app. So, if we choose to use these apps, how can we trust that our data will be safe and not left vulnerable to third parties or malicious actors?
Informed consent
One thing many of us may not realise is just how personal our anonymised data is, especially when it can be cross-referenced with other anonymised data.
Even when we willingly hand over our information, the terms we agree to are not always transparent or easy to understand.
Data such as where you go every day, at what time, and how long you stay is highly personal. In the case of fitness tracking devices, the more sophisticated they become, the more in tune they grow with our biology and health.
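To make that cross-referencing point concrete, here’s a toy sketch – the data, names, and field names are entirely invented for illustration, not something from the podcast – of how an “anonymised” dataset can be linked to a named public dataset through the few attributes they share:

```python
# A toy linkage attack: both datasets are invented for illustration.

# An "anonymised" health dataset: no names, just attributes.
health_records = [
    {"postcode": "D04", "birth_year": 1985, "gender": "F", "diagnosis": "asthma"},
    {"postcode": "D08", "birth_year": 1972, "gender": "M", "diagnosis": "diabetes"},
]

# A public, named dataset (think of an electoral roll).
public_records = [
    {"name": "Jane Doe", "postcode": "D04", "birth_year": 1985, "gender": "F"},
    {"name": "John Smith", "postcode": "D08", "birth_year": 1972, "gender": "M"},
]

# The fields the two datasets share: the "quasi-identifiers".
QUASI_IDENTIFIERS = ("postcode", "birth_year", "gender")

def linkage_key(record):
    # Build a tuple of the shared attributes to join on.
    return tuple(record[field] for field in QUASI_IDENTIFIERS)

# Index the named dataset by quasi-identifiers, then join.
names_by_key = {linkage_key(r): r["name"] for r in public_records}
for record in health_records:
    name = names_by_key.get(linkage_key(record))
    if name:
        # The "anonymous" record now has a name attached.
        print(f"{name} -> {record['diagnosis']}")
```

With just a few such attributes in common, the “anonymous” records resolve to real people.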
Irish Times columnist Karlin Lillington has been writing about technology and the ethical implications around it for about two decades.
When it comes to gathering data, she says, there’s always an inclination to do so because you can.
But what happens to this data once it’s out of our hands? How long do the companies that gathered it keep it, and what will they use it for?
More importantly perhaps, what will be the long-term implications of having this data gathered?
Who’s to say there won’t be far-reaching consequences we’re not even aware of yet? And by the time we become aware of them, all our information will already have been gathered.
Karlin tells us about her experiences covering these issues for the Irish Times, and how close we are to “too late”.
Regulating
If you went into a shop to buy an item, you would be surprised if that item was free, so long as the shop owner could follow you around forever.
Emerald de Leeuw says she pushes this shopping analogy far, because online, the reality goes pretty far too.
Not only can companies gather data on our shopping preferences, but they can also gather reams of data about our politics, sexuality, and hobbies. They can then target us with ads for absolutely anything.
The result is that we all end up with different ideas of what reality is. The bottom line has to be education.
What’s possibly even more difficult is legislating for technologies that are developing faster than the governments and bodies tasked with regulating them can respond.
Emerald has vast experience in data ethics, GDPR, and privacy law. She took us through how making technology human-centric can significantly change the way it shapes our lives and choices.
Why Listen?
This podcast exists because these issues aren’t going away. What you will hear in these discussions is the beginning of many conversations to be had on ‘where we go from here’.
Mark Little starts from the assumption that technology is a force for good in the world – but, he adds, it also has the potential to be weaponised.
Mark is a journalist and the founder of Storyful and Kinzen, both of which operate on a model of greater transparency in how we consume media.
In our panel discussion, Mark, Rasha and Laura discuss the surveillance capitalism model in big tech and come up with ideas on how it could change in the future.
Technology, and the companies that provide it to us, need to be people-centric. So what exactly does this mean when it comes to building a new business model?
The jumping-off point of this episode was whether or not we are at an inflection point. Is the genie out of the bottle? Have we gone too far, and if so, is there any way back?
We’ve gotten so used to clicking through terms and conditions and privacy statements that we’ve become desensitised to what they mean for us.
A pivotal point in our panel discussion is how to build the future of tech in a fairer and more transparent way. Topics such as killer robots and military drones may seem a world away, but many of the developments in these technologies are impacting things much closer to home than you might initially think.
Have a listen to our podcast and hear for yourself why ethics in tech matters for everyone, including you.