Software Development of Tomorrow Comes With Ethical Challenges
2019-04-25
Suvi Kava


As technology grows more and more powerful, tech companies need to prepare for sometimes unexpected ethical issues. Our UX Analyst Suvi offers some viewpoints to the discussion that is likely to become even more complex in the near future.


Our day-to-day lives are filled with technology whose underlying principles are unfamiliar to most of us. In addition to enhancing our lives, it also sometimes sparks worry – and rightly so. Ideally, service providers can tackle these fears with transparency, user-centred design and by recognising what kind of ethical considerations are required when building a particular service. In practice, these considerations range from the safety protocols of self-driving cars to the handling of private data in the services that use it.

Here are four things to consider when seeking to build ethically sound digital services.

1. Responsibility issues must be made visible

Perhaps more than ever, tech companies need to consider their role from the ethical perspective as well: is this the kind of project we want to be involved with, and what kind of effect will it potentially have on the users?


A modern software project typically has multiple parties involved in its execution: the client, the design and development company, investors, the distributor and – in many cases – the end users themselves. So whose role is it to raise concerns about ethical issues if need be?

Whatever the answer, it is clear that discussion is needed about the possible risks and how to minimise them – even the most innocuous software can pose a privacy or safety risk if this aspect is not taken into consideration.

2. Transparency can be increased via public statements and external certificates

Tech companies have made public statements about their ethical principles. In addition to stating what the company can do, it is equally important to be clear about what it won't do.

Futurice has listed five ethical principles regarding its AI-related projects. All of them have to do with human rights and environmental responsibility.


The choices and principles behind technical implementations can be made even more transparent with ethical certifications. There is also a clear market for tools that can pinpoint ethical problems in AI algorithms, for example. Our client Saidot is a startup whose business is built around creating transparent "identities" for AIs, which in turn builds more trust towards service providers. This inspired us to conduct a study on the topic; the resulting AI & Trust report is available for download.

3. Power is always an important issue and must be used responsibly

If responsibility questions naturally land on tech companies' tables, it makes sense that the same holds true for questions of power. The internal rules of software define things that range from the logic of recommendations in a streaming service all the way to what kind of patient information a healthcare application is allowed to access.

In the context of social media, there has been a lot of discussion about opinion bubbles, where people only ever see content from like-minded people. The same holds true for streaming services, which raises questions about the role of these services as influencers. Is there, for example, content that everyone should be made aware of regardless of their previous browsing history? Is it even possible for service providers to be neutral in this regard?

The notion of power is also tied to availability. If self-driving cars were programmed to always prioritise the safety of their own occupants in potential accidents, wealth could theoretically determine who survives a car crash. Despite the designers' good intentions, the value of human life would, in practice, be measured in dollars.

4. Keeping the line between human and digital clear is tricky

In speculative literature, visions of the future are often filled with human-machine hybrids. Things like virtual reality, AR and digital avatars already blur this line. One ethical guideline that may follow from this is the aim to be as transparent about it as possible.

As social media applications make their way to VR, we are likely to face new versions of the questions that arose some ten years ago with the virtual world Second Life: what does projecting your own identity into a virtual world mean in terms of "real life"? New, intense experiences and forms of communication are sure to bring their own privacy and safety issues, for example.

What are the building blocks of the current ethics discussion in tech?

Following the discussion around the topics above, the current ethical principles in tech seem to be drawn from the following sources: human rights, laws and regulations such as GDPR and the Web Accessibility Directive (see our blog post on the topic) – and, last but not least, the social responsibility of individual companies.

Besides official regulations, we need to start building an environment of trust when it comes to technology. This could even be seen as an expansion of the ethics courses currently offered by our public schools.

That being said, all-encompassing rules for these questions are unlikely to be agreed upon in the near future. What remains vital instead is a responsible mindset and transparent motives in all software projects.
