You Got This!

Walking Across The Ethical Minefield

Transcript

Thanks, Kevin. And thanks, everyone, for joining me. You can, embarrassingly, see the bottom of my PowerPoint slides there. But hi! Welcome to my four-and-a-half-hour talk, since Kevin said I have all the time in the world.

My name is Kestral. My cat has now decided this is a fantastic opportunity to interrupt. This is Content, my cat, who will be doing his absolute best to cause chaos. Go on, into that corner.

There you go. And I'm here to talk a little bit about ethics in technology this evening. I've kind of subtitled my talk "doing the right thing in a world full of lying robots", because this week, having planned this talk for ages, OpenAI gave us a little ethical dilemma with the release of GPT-4. One of the issues with GPT-4, as they describe it, is that the robot can often take you off on flights of fancy, give you wildly inaccurate answers, and completely invent quotes by people, just because it thinks that's what you want to hear.

And it does it very convincingly, which is an interesting ethical dilemma in and of itself. So whether you're working in tech or you're writing a book, ethics is something that impacts all of this, and yet it's something we don't really talk about very often. I kind of work at the intersection of people and technology. I spent a lot of time working in tech companies over a number of years, then I retrained in things like psychotherapy and psychology.

And now I find myself teaching technology ethics at a small university in London called the London Interdisciplinary School. It's very new. We're one of these kind of startup universities that's challenging the idea of what higher education means. So we don't have entry requirements for grades.

We interview literally everyone that applies. The idea is that really the best way to learn is in an interdisciplinary way: you can solve complex problems by thinking about a little bit of everything, rather than by doing really well in an exam. And let's face it, ChatGPT can take exams now, so whether exams really get a measure of what someone actually knows is not that important. Actually, what's important is passion and having ideas around changing the world. So I've got the real privilege of teaching there as one of my day jobs, and I focus on ethics in technology: ethical frameworks, and how to build thinking around them into the things you do, whether that's trying to solve the climate crisis or trying to cook a meal for a family member that you hate. There are ethical ways of doing all of those things.

And I would say, if you're cooking a meal for a family member that you hate, the unethical thing to do is add the poison; the ethical thing to do is make them something that you know they like. Just saying, in case there's anyone on the call who needs to hear that. So where are we right now? It's 2023. We're in this magical year in the twenty-first century, twenty-three years past the year 2000.

We are somehow in the future. The twenties were a hundred years ago and are also right now. Technology has evolved beyond all recognition from where it was five, ten, twenty years ago. In fact, in the first talk of the evening, and in the Q&A afterwards, Kevin mentioned how even doing things the way they were done five years ago is so different now, and we have to get out of those kinds of habits.

And if the world has changed that much in five years, how do you think it's changed in the last fifty? In 2023, we live in a world where we're surrounded by ads, and these ads seem to follow us. Every single person I talk to will get an ad at some point on their phone and say: holy crap, this app must have been listening to me, because I had a conversation with one person about this three days ago, and now my Instagram is flooded with it. Obviously, that's not how it happens, but algorithmic intelligence has got to the point where adverts serve us everything that we need, as we need it.

Sometimes to dangerous effect. We'll talk about that in a minute. VR is becoming a bigger thing, AR headsets are getting better, and the metaverse, whether you like it or not, is a thing that exists. And who owns that experience?

When you're in the real world, the world that's around us right now (and yes, I know there are papers about whether the entire world is a simulation or not; for now, let's say that this is reality), I kind of have a bit of control over that experience. I've got a screen in front of me right now, but I could yeet it off my balcony

if it really upset me that much. I've got a cat in front of me right now; I could choose to either stroke him or ignore him. I'm going to stroke him a little bit because, you know, he's a cute cat. But I control the experience around me.

But if you're programming someone's entire VR experience, you own their world: everything from the sound the ground makes when someone walks on it in VR, right through to the color of the sky. All of that is controllable by you. If you're designing a VR experience for someone, you control that whole world. GPT-4 and similar sorts of technologies around artificial and algorithmic intelligence are not just allowing us to generate incredibly fantastic memes on things like ChatGPT; they are also sparking huge ethical debates in the fields of art and anything creative, where people's work is being fed into these machines. AI apps like DALL-E are able to generate new images, but they're all generated based on other people's work, without any accreditation. We also live in a world that is full of devices. A few years ago, I would give talks and say, we now have devices in our pockets that are more powerful than what NASA had in 1969. But they're not just in our pockets anymore.

My Apple Watch is a few generations old, and it is incredibly powerful as a piece of technology; its computing power is beyond things we had thirty or forty years ago. We carry devices with us everywhere, from smart rings to smartphones. We have fridges that can form zombie botnet armies, our washing machines can talk to us via WiFi, and my cat can scratch at the screen, but not if I move him over there. So we are in a place where technology defines everything.

It shapes the way we think. It defines how we experience the world. It impacts the way that we create and the way that we live. Everything about our existence is intrinsically linked with technology. And sometimes, as I'm sure most of you are aware, technology is layered and layered and layered, and one experience could have had hundreds of thousands of people contributing to it in ways they don't even realize.

There are times when I wonder whether the world will just collapse if one person stops updating one repo that's a package given out in a package manager, that's a small part of a small thing, that's a small part of a bigger thing. Technology is like that. And we don't necessarily think about the tiny things that we do and the ethical implications they may have. When I worked at Microsoft a really long time ago, we put out quite a lot of different technology at the time, but there was never talk around end-user ethics and how it might be used. There was lots of great marketing speak. And I'm sure there was an ethics team dealing with that somewhere in the company.

But as someone working on a product, as a dev on a product, I didn't know what that was. And I was going out on the road talking to developers saying, hey, build on top of this product. I didn't know what the ethics team had thought about, or whether they had done their homework or not. It just wasn't part of my vocabulary at the time.

And that's why it's really important that we start to think about ethics, and not just ethics in the abstract, but ethics in technology. So how have we not questioned these things before? The answer is: we kind of have, but I don't think we really knew what we were questioning. It's 2023 now, but back in 1966, two things were happening.

In England, we did some sports thing that a lot of people got really happy about. But in America, someone wrote a piece of software called ELIZA. And ELIZA was the first attempt at a natural language program for computers. It was ported to a bunch of different platforms; this screenshot is from the Commodore 64 version.

I had it, obviously, not in the sixties. I'm not that old. I had it on a DOS-based PC. And ELIZA was essentially like a therapist. It employed a style of psychotherapy that basically helps people to help themselves, by asking them questions and getting them to provide the answers.

So it didn't need any medical knowledge; all it needed was the ability to ask people to reflect on the things they had just said. And it did that relatively well.
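To make that mechanism concrete, here's a minimal sketch in Python of the pattern-and-reflection trick that ELIZA-style programs rely on. The patterns and wording here are invented for illustration; they are not the original program's script.

```python
import re

# Reflections swap first and second person, so a fragment of the
# user's own words can be mirrored back: "my job" -> "your job".
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# Each rule pairs a regex with a question template. The captured
# fragment is reflected and echoed back; no understanding required.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # the all-purpose fallback

print(respond("I feel nobody listens to me"))
# -> Why do you feel nobody listens to you?
```

That's the whole trick: reflect the user's own words back at them as a question, and let them do the thinking.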

At the time, in the sixties, there were reams of articles talking about whether or not ELIZA could convince people that it was human. Now, in the chat after this, I'm going to paste a link to an online version of ELIZA that exists now. Use it for thirty seconds and I think you'll agree there is no way you could ever think it was human. But that's by our standards today. In the sixties, this was revolutionary, particularly given that technology was in its infancy. I mean, people were using this on terminals attached to giant computers.

And it was an incredible application to have been realized on them. And then the conversation came along: could something like this pass the Turing test? Now, the Turing test, so named because it was devised by gay icon and celebrity pop sensation Alan Turing, posits that a computer could be deemed artificially intelligent as long as a human questioner couldn't tell the difference between a computer and a human replying to a question.

Now, that doesn't mean consciousness. And it's really interesting here that a lot of people are very worried about the technological singularity; there's an amazing paper by Vernor Vinge that I can point you to on that if you're interested, and I teach my students a lot about it. But technology really only has to be good enough

to fool the human. And as we all know, humans are quite easy to fool. So can we develop technology that is intelligent enough to convince a human questioner that a computer and a human responder are the same thing? This is kind of why the film was called The Imitation Game, because this was a lot of what Alan Turing worked on in his later years. And this is something that technology has used for years.

Television is not video. We call it a movie because it's moving images, but they don't actually move. You're just seeing twenty-five or thirty or sixty different still images a second. They change just quickly enough to convince your brain that the image is in fact moving.

There's no magic to video. It is all just still images. That's how screens work. That's how TVs have worked since they were invented.

And it's really interesting that this pervasiveness of stuff working just enough to fool us, just enough to make us think it's doing a thing, is how a lot of technology works. We call it smoke and mirrors. We call it a minimum viable product. But ultimately, if you can demo something and people are convinced it works: hey, well done, we've got our investor backing.

But is that inherently unethical? Well, you have to think about how we've evolved from the time of ELIZA. ELIZA didn't really fool people, but it got people talking about the Turing test, something that had been devised many years before. And so ELIZA didn't really pass the Turing test.

But where are we now? I mean, we are at a point where ChatGPT is relatively convincing. I've got hundreds of examples of times I've tried out ChatGPT over the last few months. I've got it to give me things that could easily have been written by anyone. I've had it invent scripts for kids' TV shows.

I've had it write me papers for things. I have tested ChatGPT's knowledge, and yeah, it gets things wrong sometimes, but it's convincing enough that you think it's right for a minute, and we have to fact-check it. So that's algorithmic intelligence and artificial intelligence.

But how does that relate to the real world? And how do we think about things working just enough that it seems like it's not really that big a deal, when actually it is a big deal? The next example, without apparently clicking the wrong button, is hand dryers.

How many of you have used one of these before? I'm going to ask this knowing full well I can't see any of you, because this is a remote session. This is the first iteration of the Dyson Airblade hand dryer. It was kind of a revolutionary dryer: you put your hands in like that, and it dries your hands. And everyone wanted them because they looked great. And you know, lots of people went: yes, great British invention.

The guy that runs Wetherspoon's really loved it. And, you know, there we go. But these hand dryers were inherently unethical. They worked really well. They dried your hands.

And they went everywhere. And the fact that there were people complaining that they didn't work was kind of drowned out by the fact that they were good enough to convince the majority of people they were amazing. If you had a darker skin shade, you would really struggle to get one of these Dyson hand dryers to work for you. That's because they hadn't done the right testing. They hadn't really quality-checked their product with a diverse enough set of testers.

And so their sensors just didn't work very well. They'd built something that hadn't really been done before, a new kind of hand dryer, great. But in innovating, they did not do the required amount of testing. They didn't think: do we need to test this on people that aren't exactly like James Dyson? Well, the complaints mounted, and rather than make any kind of statement, they released a new and improved Dyson Airblade dryer, and that one worked better. But it was a huge embarrassment for the company.

Had they just had some kind of ethical framework, this would not have happened. Very similar things have happened to other companies before. About eight years ago, HP released a laptop with a built-in webcam. They all have them now, but they didn't all at the time. And the software that came with the laptop, some of HP's built-in software, was really revolutionary, because it was one of the first bits of software that would do face tracking in your webcam.

I mean, it's kind of a given that pretty much any webcam or most pieces of video software will do that these days, but back then it was kind of revolutionary. They touted it as a great feature. Until one YouTuber tried it, and it didn't work. He did some testing and realized that it just couldn't detect people who weren't white. So here you had a fantastic product from HP, with face tracking, if you happened to look exactly like eighty percent of the people who work at HP.
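To show what "the right testing" can look like in practice, here is a small, purely illustrative Python sketch; the group labels, results, and threshold are all invented. The point is to measure success for each group of testers separately, rather than hiding a failure inside a healthy-looking overall average.

```python
from collections import defaultdict

# Hypothetical QA results from a deliberately diverse tester pool:
# (tester's demographic group, did the feature work for them?).
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(lambda: [0, 0])  # group -> [successes, tests]
for group, worked in results:
    totals[group][0] += int(worked)
    totals[group][1] += 1

rates = {group: hits / n for group, (hits, n) in totals.items()}
print(rates)  # {'group_a': 1.0, 'group_b': 0.25}

# The overall rate (here 62.5%) hides the problem. Gate the release
# on the worst group's rate instead of the average.
THRESHOLD = 0.9
failing = [group for group, rate in rates.items() if rate < THRESHOLD]
if failing:
    print(f"Release blocked: success rate below {THRESHOLD:.0%} for {failing}")
```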

They didn't have an ethical framework for testing. They released a piece of technology into the world that had an inherent bias to it. And that's interesting, because ChatGPT and algorithmic intelligence have those exact same biases, because we haven't developed the right kind of ethical thinking before we start building things. And it's difficult to think about doing that, because if it's not your job to write an ethical framework, then you probably don't have the time to think about it. You are writing a feature or a tool in a piece of code.

You're creating a piece of training material. That's your job. That's what you're measured on. You're not measured on having awkward conversations. A lot of people who are doing freelance coding projects or working on something in their spare time also say: well, I'm just writing the code.

The code works, right? It works for anyone, surely. But does it work for someone who can't necessarily afford a MacBook? Does it work for someone who's maybe using it on a very, very slow connection because they don't have fast internet in their country? Does it work if there are power disruptions because you're in a country that's currently at war?

Does it work for everyone on every platform? Does it do what you want it to do without unintended consequences from the code that you're writing or the things that you're doing? It is a minefield. And it's no wonder people don't often like to talk about the ethics of what they're doing, because everyone believes that thinking about the ethics of what they're building is someone else's problem.

And no one, save a very, very few incredibly privileged but dearly missed royals, will walk through those minefields. But we all have to be a bit more Princess Diana. And that is probably the first time I've said that in my life. We have to think about how we navigate this minefield, how we move through it.

Because it's not just about the things we're working on and the stuff we're doing right now; it is about having the vision to think about how the things you're working on could be used in the future. No one can see into the future. But if you're writing something that could be used for nefarious purposes: are you creating a tool that could actually power the next wave of autonomous killer attack drones that will come and destroy humanity? It's a very extreme example, but, you know, you only have to look to sci-fi.

Are you writing something that is going to be used to suppress a group of people, or a person? Are you building something that just hasn't been sensitivity-checked? You could be creating a fantastic piece of art. You could be writing a book. You could be creating training material. You could be doing a dev session.

Have you really thought about the assumptions you're making about your audience? Have you got the vision to think about how this will look in a year's time, in five years' time? It's really important to think about the future. And it's not an easy, quick fix. The concept of utilitarianism is that, actually, never mind upsetting one person, as long as there's a greater good happening.

But that's not necessarily the right approach. Utilitarianism isn't an ethical framework in and of itself; it's just the idea that the needs of the many outweigh the needs of the few. Can you think of a time recently where a piece of technology was forced on everyone, without real choice, for the greater good?

COVID. The COVID vaccine. I am someone who is a firm believer in vaccines, and I have had all my boosters and everything, but this was a new piece of technology that was rolled out to everyone around the world for the greater good. This was an example of utilitarianism, where the needs of the many outweigh the needs of the few. But there are very few instances where that is the right way of creating something and rolling it out to people.

It's really important to think about all of the vision that you have, all of the people that might be impacted by something. So how do you do that? How do you actually start thinking about things more ethically? Because by this point in the talk, you're probably thinking: oh, crap.

I have a huge responsibility on my shoulders, and I now have no way to know what to do about it. And I apologize for getting to this point and depressing you all so much. But there are so many examples in the world of just thinking: can you expand what you're building to include more people? A fantastic example from a few years ago is the evolution of gender options in online forms.

I'm someone who, well, I am trans, and the gender options for me were never very good. I could only ever select male or female. Now there are a plethora of options. But even in this example here, you can select up to five from a list. Is that more or less ethical than providing a free text box for people to just write what they want in?

But from a data science point of view, does that mean that the data you're collecting is somehow less valid? Well, you have to think about why you're doing this. Why do you need that information? Why do you need that data? This is from a dating website, so it's kind of there for an obvious reason. But if you want to use that demographic data later, you have to think about the quality of the data you're capturing. Does that provide an ethical dilemma?
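As a sketch of that trade-off, here's one way, in Python, with invented field names and category lists, to record both a curated multi-select and a free-text self-description: the curated choices stay easy to aggregate, while the free text is stored verbatim rather than being forced into a box.

```python
# Invented example: the form offers a curated multi-select (up to
# five choices) plus an optional free-text self-description.
CURATED_OPTIONS = {"woman", "man", "non-binary", "genderfluid", "agender"}
MAX_CHOICES = 5

def record_gender(selected: list, free_text=None) -> dict:
    # Keep only recognized options, capped at the form's limit.
    curated = [s for s in selected if s in CURATED_OPTIONS][:MAX_CHOICES]
    return {
        "curated": curated,           # clean categories for aggregation
        "self_described": free_text,  # stored verbatim, never coerced
    }

print(record_gender(["non-binary", "genderfluid"], free_text="demiboy"))
# {'curated': ['non-binary', 'genderfluid'], 'self_described': 'demiboy'}
```

Neither field answers the ethical question on its own; it just makes the choice, and its effect on your data, explicit.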

Ultimately, this comes down to having a lot of awkward conversations. You might need to say, hello, have we thought about the ethics of the thing that we're building, the thing that we're doing here, the thing that we're working on? To people who may not want to hear it.

And if you're working on a project solo, you have to have a word with yourself and think: have I given any thought to the ethics of the people involved in this? And there are some fantastic ways that you can get round this and actually start to have those conversations. Obligatory TV screenshot aside, there's a tiny little framework that you can start to use to challenge your own thinking.

And that is: testing, evidence, discussion, and action. When I teach this over a long period of time, over a module, there's much, much more depth to it than this. But as a starter for ten, there are these four things you can do in any situation that can start to show you the ethical way forward. So whether you're writing code, or writing an article or a tutorial, or writing a book: define how you are going to test. And I don't mean unit-testing code. I don't mean does it pass debugging.

What I mean is: who is this tool aimed at? Are you writing a command-line tool for somebody? Are you writing a web app? Are you writing something that will be used on a phone? Are you writing something that will be delivered online or face to face?

Are you writing something that's a book? Think about how you will test whatever it is you're doing. And if you haven't planned to, make a plan to do testing, and make your testing as broad as possible. If you're writing a book, for example, you get sensitivity readers to come and read your book.

They're not just proofreaders. They're not just going to check the spelling, the punctuation, the grammar. They're going to read it and, from their experience, from their point of view, say: yeah, you know what, you've kind of used the Black character here in a bit of a tropey way, and that's not good. You might want to change that.

Obviously, you as the author of that thing still have the final decision. But choose a diverse set of testers, and make sure you test on your target audience, even if it's a talk you're giving. Think really carefully about how you collect your evidence as well. Having a comment in your code, or having your iPhone Notes app open, is not an adequate way of collecting evidence, especially if you're working in a team. If you're working in a team, you need a collaborative space where you can collect everything together about who your testers are and what people's concerns are.

Allow people to comment on those concerns, prioritize them. In exactly the same way as you would with code issues, you have to treat ethical issues as equally important. Build up that evidence space; create a little Notion page or something in whatever kind of shared space you have going on, to make sure you've got it front and center and that you and your team can look at it. And if it's just you working on something, always keep an ethics log for whatever it is you're doing. Even if the ethics log is just: yep, everything's looking good, testers came back positively. Congrats if that's the case. I don't think I've ever done anything where I didn't have to look back at it and go: oh, okay, maybe a tweak is required.
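If it helps to see that as a structure, here is a minimal, hypothetical sketch in Python of an ethics log treated like an issue tracker. The fields and statuses are made up, and a shared document or Notion page works just as well; the point is only that ethical concerns get tracked and triaged like any other issue.

```python
from dataclasses import dataclass, field
from datetime import date

# A hypothetical ethics-log entry, tracked with the same seriousness
# as a code issue: who raised it, what it is, and where it stands.
@dataclass
class EthicsEntry:
    raised_by: str
    concern: str
    priority: str = "medium"      # triage like any other issue
    status: str = "open"          # open -> discussed -> actioned
    notes: list = field(default_factory=list)
    opened: date = field(default_factory=date.today)

log = [
    EthicsEntry(
        raised_by="sensitivity reader",
        concern="Chapter 3 side character leans on a stereotype",
        priority="high",
    )
]

# Review high-priority open entries first, like a bug triage.
for entry in sorted(log, key=lambda e: e.priority != "high"):
    if entry.status == "open":
        print(f"[{entry.priority}] {entry.concern} (raised by {entry.raised_by})")
```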

Discussion is also key.

Who is in your circle? Have you got the right people and the right eyes on the thing? If you are working in a team, this discussion point is essential. If you're working solo, then you need to find trusted people that you can talk to. Because even if you're working on a project solo, you will still get up inside your own head.

And you cannot have a conversation with yourself. You need to discuss these things with people who aren't your testers, people who aren't hands-on with it, but people who understand you and the way you think and the way you work. And finally, once you've done all that: you've done some testing,

you've got your testers, you have evidenced everything, you have had those discussions. Now you can act. You can decide what you need to do next: what you need to change, what you need to fix, what you need to flag to higher-up people.

And then you go through the cycle again. Because once you've made changes, you need to go back to those testers and say: hey, what about this, any better? You evidence that: yeah, it's great now. You discuss it: hey, great job, no more action needs to be taken. But this is an iterative process. You go through it time and time and time again, and it's so important to just have this as a little kind of mental checklist.

And say: okay, have I arranged good testing? Have I got a place to keep my evidence? Have I got people I can discuss this with? And then, when the time comes, what actions have I taken? The only way that we can work more ethically, and start to confront the vastness of the question of ethics in technology, is for us to confront it, to start thinking about it, and to start realizing it's not a bad thing.

It's okay if something you've written isn't quite ready yet. It doesn't make you a terrible person if it isn't automatically the most ethical it can possibly be. Creating an ethical product, whatever that product is, is a journey. And we're all different.

No single person can have all the life experiences in the world. Now, I'm trans, and I'm Jewish, but I'm also white and relatively comfortable. So I know that I don't have all the experiences in the world; I'm relatively privileged. And a lot of people in my situation are relatively privileged. So I have to think: who do I talk to?

Where do I go? As a writer, I have a huge network of people that I know who are sensitivity readers and all the other things that I need. And it is absolutely okay to rely on them, and it is okay to ask the questions. The worst thing you can do is think: I'm pretty sure this is okay,

I'm just going to go with it. So that's been a bit of a whistle-stop tour of ethics and technology. Hopefully a bit of a starter for ten, or some food for thought, for those of you that haven't really confronted the topic before. I teach this as a whole module.

So obviously I could only cram so much into a small talk. But if you look at that initial framework, start to think about bringing it into the conversations you're having, and really just start thinking about the ethical implications of anything you do, then you can get somewhere. And is it a pain? Does it add more work?

Absolutely. Is it the right thing to do? Arguably, I would say yes. So thank you for listening to me talk about ethics in technology, and doing the right thing in a world full of lying robots.