You Got This!

What Civil Engineering Can Teach Us About Ethics

Contemporary software engineering is at a crucial juncture in its evolution as a discipline. We're professionalizing and expanding our abilities, but in doing so we're encountering dramatic new risks and venturing into new ethical territory. In this way, we share similarities with the expansion of traditional civil engineering during the industrial revolution. In this talk I'll discuss what lessons we can learn from that industry, and how we can try to avoid making some of the same mistakes. I'll also give a basic introduction to engineering ethics, discuss some examples of ethical problems from my own career, and explore how we can try to improve our ethical decision-making by incorporating ethical reasoning into the different stages of our work.

I'm Richard! I've been living in London for ten years, but I just moved to New Zealand about a month ago, so I'm coming to you from New Zealand. It is winter, it is 5.47am, and still dark outside! If I struggle to talk at all, that is my excuse! I'm here to talk to you today about what is best in life. So, a bit about my background: I started out at university studying mechanical engineering, which I honestly hated, and I switched to studying philosophy, which I loved. Now it's more than ten years later, and my official job title is front-end engineer. Go figure!

So I'm a front-end engineer these days, but I usually refer to myself as a developer. I've always felt a bit uncomfortable with the title of "engineer", though I couldn't put my finger on why. It felt like I was claiming some sort of prestige that I hadn't earned. I didn't complete my engineering degree or study computer science, and I'm mostly self-taught, like many of you are, I'm sure. I like to think I'm reasonably okay at my job, but I feel I'm still hacking things together a lot of the time. Back in the day, many of us who made websites called ourselves webmasters or webmistresses, which I think is an amazing job title.

I'm sad I didn't start my career in time to have "webmaster" in the career section of my CV. It's like "social justice warrior": used as an insult, but badass. Then the title fell out of fashion, and we became web developers. Now many of us have the professional-sounding title of front-end engineer. I don't think that's a bad thing, or that we should stop calling ourselves engineers. You can call yourself whatever you want: a techno-Viking, or a ninja full-stack unicorn, if that makes you feel better. We used to work on dinky little websites thrown together, and now many of us are working on massive projects that form the technical backbone of multi-million-dollar corporations. We've come a long way from being webmasters and webmistresses. A lot of us are self-taught. Don't get me wrong, this is great: it helps to lower barriers to entry. But as the industry gets more professional, it might be worth thinking about the other trappings of professionalism.

If we are going to call ourselves engineers, there are ethical duties and codes of responsibility that go along with that title. A hundred years ago, civil engineering was in a very similar situation to the one the tech industry is in now. As the industrial revolution receded behind them, engineers found new ways to use the fancy new technologies they had developed.

They grew more sophisticated in their approach, and their projects ballooned in scale and complexity. But as these projects became more ambitious, there was an accompanying problem: a rise in major engineering disasters. The turn of the 20th century saw a wave of epic structural failures, including massive bridge collapses, and also the Great Boston Molasses Flood, which, if I had to name my favourite disaster of all time, would have to be it, just for the mental image of a tsunami of liquid sugar travelling at 35 miles an hour, consuming everything in its path. Terrifying, but kind of delicious. Anyway, these disasters had a profound effect on the way that the public saw engineering and forced engineers to confront their shortcomings.

As a result, they began to regulate themselves more intensely and established standardised industry codes of ethics. So what is ethics? Ethics is a branch of philosophy devoted to answering questions about what is best in life. Questions like: what is best? What is the good life? How should I live? And I know what you're thinking: I can just imagine the cogs turning in your software development minds. You're thinking, that's easy. What is best? Both spaces and tabs, on alternating lines. The good life? Spending the entire sprint refactoring your team-mates' code. How should I live? By outsourcing your job to India and spending all day on TikTok. How should I behave towards other people? Interrupt them when they have their headphones on. And the purpose of life is obviously replacing everything with JavaScript! You're all monsters! Moral issues can get us worked up. Think of abortion and euthanasia for starters.

Because these are such emotional issues, we often let our hearts do the arguing while our brains go with the flow. But there are other ways of tackling these issues, and that is where philosophers can come in. They offer us ethical rules and principles that enable us to take a cooler view of moral problems. So, much like React provides us with a framework to help us handle DOM interactions, ethics provides us with moral frameworks we can use to work our way through difficult issues. Just as we have many different competing JavaScript frameworks, like React, Vue, Angular and so on, there are many different competing moral theories, like consequentialism and deontology. Learning about these frameworks helps you understand how you think about morality, but it's not necessary for us to go into them in depth right now in order to be able to tell right from wrong.

Philosophers like to do things called thought experiments, which are like real experiments, only better, because you never need to get out of your armchair. One of the most famous is the trolley problem. I'm sure many of you are already familiar with it; it's a clichéd one these days. If you haven't heard of it, here is the deal. There is a runaway trolley barrelling down the railway tracks. Ahead, there are five people tied up, unable to move, and the trolley is heading straight for them. You're standing some distance off in the train yard, next to a lever. If you pull the lever, the trolley will switch to a different set of tracks. But you notice there is one person on the side track. You have two options: do nothing, and the trolley kills the five people on the main track; or pull the lever, diverting the trolley onto the side track, where it kills the one person. Which is the more ethical choice? This is the part of the talk where I would normally do something interactive, but we're doing this online, so I'm going to post a message on the Discord. Feel free to respond: react with a thumbs-up or thumbs-down for whether or not you would pull the lever. No wrong answers!

Now imagine that instead of standing at a switch, you're standing on a bridge over the tracks, next to an extremely large man. The trolley is coming, and the only way to stop it is to push the large man onto the tracks: he is the only one big enough to slow down the trolley. He's looking you right in the eyes, and he's begging you not to do it. What do you do? Again, I'm going to chuck a question into the comments on the Discord. Feel free to respond; I'd be interested to see your answers.

The trolley problem has been the subject of many surveys, which find that nine out of ten respondents would throw the switch to kill the one and save the five, and that has been my experience when I've done a show of hands in person. In the large man scenario, the situation reverses, and only one in ten would push him onto the tracks. Incidentally, a 2009 survey of professional philosophers found that only 68% would throw the switch, 8% would not, and the remaining 24% had another view or could not answer. So if you're ever tied to a train track, you'd better hope that the person at the switch is not a moral philosopher!

Why the difference between the two outcomes? One theory is that it's because two different parts of your brain are fighting with each other. Some researchers looked at people's brains using MRI machines and demonstrated that personal dilemmas, like pushing a man off a footbridge, engage brain regions associated with emotion, whereas impersonal dilemmas, like diverting a trolley by flipping a switch, engage regions associated with deliberate reasoning, and these brain processes compete with each other when you try to make a tough moral decision. Basically, inside your brain, you've got a monkey and a robot fighting over the controls.

Every time you try to make a moral decision, they duke it out. The monkey understands something simple like pushing somebody off a bridge, and it's horrified, but it doesn't understand something complex like a mechanical switch, so in that situation the gut response is reduced, and we are able to throw the lever without feeling such a crushing sense of moral harm. Now, some people have a stronger monkey, and some have a stronger robot, and that is great, because both are useful in different situations.

This is kind of tricky for us programmers, because we work on complex, abstract problems, which can make it difficult for our monkey brains to trigger moral responses. By the way, if you think it's hard for programmers to experience the full range of ethical responses, spare a thought for autonomous vehicles. Self-driving cars don't have meat brains, and you can't make an algorithm ethical; you can only make it as ethical as the people programming it, and we can't even agree on whether to use tabs or spaces. There are tricky problems here that self-driving cars will face. We'd prefer a self-driving car to swerve into trash cans rather than hit someone, and computers can make these decisions quicker than we can: if we decide in advance what we want them to do, they will follow our instructions. So we probably want to programme our car to hit a single adult rather than a busload of children, right? But what if the adult is a Nobel-prize-winning cancer researcher? What if the adult is driving the car? Would you choose to buy a self-driving car that is designed to sacrifice your life to save others?
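That "decide in advance" point is the crux. As a toy sketch (the outcome names and the priority table here are entirely made up for illustration; this is nothing like a real autonomous-driving system), pre-programming a preference is trivially easy. The hard part is agreeing on the preferences:

```python
# Hypothetical ranking: lower number means a more acceptable outcome.
# Everything contentious in this sketch lives in this table.
PRIORITY = {
    "swerve_into_trash_cans": 0,
    "hit_single_adult": 1,
    "hit_busload_of_children": 2,
}

def choose_action(available_actions):
    """Pick whichever available outcome the pre-agreed rules rank best."""
    return min(available_actions, key=PRIORITY.__getitem__)

# The car can evaluate this far faster than any human driver could:
print(choose_action(["hit_busload_of_children", "swerve_into_trash_cans"]))
# prints "swerve_into_trash_cans"
```

The code is the easy bit; deciding what goes in that priority table is the ethical work, and that is exactly what projects like the Moral Machine try to crowdsource.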

Researchers at MIT came up with a nice solution for this. They built an app to mine data on people's answers to different trolley problems, so they could use it to help decide how autonomous cars should behave in different scenarios. The website is called Moral Machine, and you can go there and start judging scenarios. You have to choose between, say, a male athlete driving the car and a jaywalking baby. On the one hand, the baby doesn't know not to cross on a red signal, but on the other hand it might grow up to be Hitler, so it's a tough call. It turns out we are collectively bad at these moral calculations. After concluding the Moral Machine study, MIT released some of their results, and we can learn three things from their data.

Firstly, people would prefer that criminals die in car accidents rather than dogs, but they valued the lives of criminals vastly over cats. Secondly, BBC viewers still clearly prefer the previous incarnations of Doctor Who. Personally I think that's sexist, but you can't argue with the data! Thirdly, in the future it would be wise to walk around wearing a stethoscope and a lab coat, so you're not murdered by a self-driving car.

The Moral Machine is a cool idea, but it doesn't solve our problem, because we can't outsource all of our ethical decision-making to the internet. We are individuals working on our laptops, and we have these ridiculous meat brains with which we have to make our own decisions about whether to kill baby Hitler. Sometimes we make the wrong call.

Let's shift gears for a minute and consider the Volkswagen emissions scandal. You may recall that VW added special software to millions of diesel cars that could detect when their exhaust was being checked by regulators. They managed to bypass emissions standards in the US, the EU and elsewhere for a period of about five years. Their workaround allowed the cars to emit 40 times more nitrogen oxide, a pollutant that contributes to 40,000 early deaths a year in the UK alone. It's pretty safe to assume that VW's technical hack is likely to result in several thousand premature deaths, plus thousands more cases of asthma and other lung diseases. As someone who developed asthma in the last few years, possibly due to London's air pollution, I took it kind of personally. So when I heard a few years ago that one of the engineers at VW was imprisoned for his role in the scandal, I thought: good.

But I've got to give credit where it's due. It is a brilliant technical hack. It's ingenious. I imagine the engineers who created it must have felt proud of themselves at the time; I can picture myself in that situation, feeling smug about it. But at the same time, you wonder why no-one spoke up and said, "Do you think maybe we're being complete arseholes here?" How did they get it so wrong? Are they inherently bad people? Maybe it was because the monkey part of their brain was unable to deal with the complexity of the problem. You've got cars, software hacks, air pollution, and decades later people you don't know might die. It's all a bit much for the poor monkey to handle.

We established earlier that ethical reasoning involves an internal struggle for control, and the weird thing about humans is that sometimes we fail ethically when we are so focused on achieving a goal that we forget to think about the consequences of our actions. Or else we justify our actions to ourselves in ways that don't stand up to scrutiny, and never stop to properly reflect. I'm sure we've all done this at some point. I definitely have, and it has led to some of my biggest screw-ups. When you're looking at a wall of code, it's easy to forget about the humans that will be affected by your decisions, and unlike in civil engineering, it's usually pretty easy for us to fix mistakes: you just roll out a patch or an update. In tech, we like to move fast and break things, but we don't want to move fast in oncoming traffic and break people.

I think the monkey brain is a factor in many of the ethical lapses we see today, whether it's Facebook enabling fake news, or Equifax. I want to believe that the people making these decisions are doing so because they're not thinking hard enough about the consequences and the people affected by their actions. However, there are also those who say, "I don't know about all that ethics stuff. I'm just an engineer. It's not my responsibility", like Wernher von Braun, who was happy to turn a blind eye to anything as long as he was allowed to play with his rockets. So, to be clear: nobody is exempt from having to behave ethically. Scientists and engineers are not a special group that gets to be amoral. Ethics contaminates everything: whether you're building rockets or designing algorithms to help police identify gang members, you have a duty to consider how they might be used. With so many examples of ethically compromised decision-making in tech, it's easy to get pessimistic.

There is good news, though. If it's easy for people to act unethically when they don't think about it, people also behave more ethically if you remind them to. For instance, a group of researchers at Newcastle University found that hanging up posters of staring eyes was enough to significantly change people's behaviour: it made people twice as likely to clean up after themselves. If just a poster of eyes can achieve that much, imagine what else we could accomplish with a few well-placed reminders.

I'm not saying we should attach googly eyes all over our offices and computer screens, even though it would help me make proper eye contact on Zoom calls and meet-up talks. I suspect it would feel creepy, like being in a cartoonish surveillance state, so probably not a good idea. But if we want to establish an organisational culture where people tend to act morally, then I think reminders can be a productive tool to help us achieve this.

I mentioned before that many industry bodies introduced formal codes of ethics in the early twentieth century. These came along with more legal regulation and barriers to entry, which I don't think would be good for our industry, but ethical codes are a great idea. They're a good way to remind people to act ethically, because basically, when you tell people "don't be a dick", they're less likely to be a dick. We already do this with codes of conduct on open-source GitHub repositories, and at conferences and other events, including You Got This, which has a fantastic code of conduct on its website that I encourage you all to read. We can do this at our organisations too. The most important thing is to set appropriate expectations for ethical behaviour.

There are loads of other codes around, including the one from the ACM that Catherine mentioned earlier. Read over the different codes, discuss them with your colleagues, and think about what sort of ethical principles you want to choose for your own work, team and company. You can use an existing code of ethics or make your own. Once you've chosen an ethical code, communicate it to your team. How you communicate it is up to you: for example, you could include it in your onboarding for new starters, add ethical checks to checklists and documentation for new projects, or run internal publicity campaigns. The important thing is that it becomes part of your team and company culture. The act of communicating these expectations is important for empowering team members before things become uncomfortable, or it's too late.
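For instance, an ethics check added to a pull-request or project-kickoff checklist might look something like this (a hypothetical sketch; swap in the questions from whichever code of ethics you've chosen):

```markdown
## Ethics check
- [ ] Who is affected by this change, and have we considered any harm to them?
- [ ] Would I be happy for everyone to know the decision I made here?
- [ ] Do I think the consequences are acceptable?
- [ ] Would I recommend the same course of action to others?
```

A few lines like these in a template cost nothing, and they act as exactly the kind of well-placed reminder we talked about with the staring eyes.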

A few years ago, I was working for a consultancy which assigned me to build a website for a client I didn't approve of. I got so stuck into solving the technical aspects right away that I didn't stop and think about whether I was morally okay working for the client until I was deeply invested. I moaned about the client, and I was told: if you don't want to work for this client, that's fine, but you should have said something at the start of the project. It made me realise that it was okay to say no to client projects, but also that the appropriate time to do that is before you start work. The later you leave it, the harder it is to say no. The next time a dodgy client came along, I felt more comfortable expressing my concerns up front, and we ended up turning down the client. If we establish and reinforce team norms that make it okay to speak up when we're uncomfortable, we can avoid these situations.

I tend to think of this like encouraging developers to submit bug reports and point out problems in your applications or processes: if everyone feels empowered to speak up, then you're all better off. By the way, Kevin tells me next month's meet-up is going to include at least one talk about how to say no. If this is something you're interested in, definitely attend that one.

On a related note, if you speak up about ethically dubious practices at your workplace and your employer doesn't listen, you may have a duty to report it to the authorities, or otherwise make it public. A basic dilemma in engineering ethics is that an engineer has a duty to their client or employer, but an even greater duty to report possible risks to others if the client or employer fails to follow the engineer's directions. A classic example of this is the Challenger space shuttle disaster. NASA engineers raised warnings about the faulty O-rings in the boosters and the dangers posed by the low temperatures on the morning of the launch. Managers disregarded these warnings and failed to adequately report the concerns to their supervisors. It was later argued that in these circumstances the engineers had a duty to circumvent their managers and shout about the dangers until they were heard.

I mentioned building ethics checks into processes as a regular reminder to encourage ethical thinking as early as possible. A friend who works as a psychotherapist tells me their training includes ethics checks as a core part of their process, so whenever they're trying to make a tough decision, they have a set of questions they can use to trigger different types of ethical response.

The first one here, "Would you be happy for everyone to know the decision you made?", is, I think, a monkey-brained question. It's good at triggering emotional responses like shame. For example, if you're considering being lazy about making a website accessible, imagine there is a disabled person sitting next to you, and ask whether you would be comfortable explaining your choices to them.

The second one, "Do you think the consequences are acceptable?", seems designed to trigger more of a consequentialist, utilitarian response: a rationalist, robot-brained approach. The third, "Would you recommend the same course of action to others?", reminds me of Kant's categorical imperative, which says you should only do something if you're okay with it becoming a universal law. So that one is covered too, if that's your thing. I think these are a great start, and feel free to build on them or tailor them to your own work.

Finally, we can help engineers develop more empathy for their users by meeting them in person. Get devs to sit in on user-testing sessions. Empathy for your users helps us design better, user-centred solutions too, so it's a win-win. These ideas are a start. They won't fix everything. They won't stop the fact that... [inaudible].

As webmasters and webmistresses, we have the power to help shape the web. As Catherine said earlier, we have to remember that this power comes with a great responsibility to do the right thing. Thanks! I'm happy to answer any questions you might have. Hopefully there's not too much feedback. We will see.