Key issues in tech ethics

Jeremy Renwick Blog

There’s definitely something about attending meetings at the Houses of Parliament: the corridors of power oozing history, the sense of events happening around you, and the hint of a promise of a smidge of influence. So the chance to opine on “What are the most pressing issues in tech ethics?” was too good to miss.

The big agenda

To try to be representative, I asked my Facebook, LinkedIn and Agilesphere communities the question and got a very interesting range of replies. The sheer breadth of the issues raised felt important in its own right.

Here are just some of the topics that came up:

  • As you’d expect after the Facebook / Cambridge Analytica revelations, targeted political advertising on social media and data privacy had lots of mentions
  • Fake News
  • The other obvious one was the impact of automation and robotics, both in terms of software algorithms and autonomous devices, e.g. driverless cars
  • There was also a lot of concern about hidden (or subconscious) bias in algorithms. This was a significant discussion at the meeting.
  • Digital literacy – ensuring that people understand what the possibilities and issues are. In the discussion we talked about data consumers becoming data citizens
  • There were concerns about regulation keeping up with innovation.
  • Profit motive vs human needs, i.e. is disruption always good?
  • The environmental impact of tech, e.g. mining rare earths and the constant replacement cycle of tech hardware
  • Over-reliance on technology to the detriment of human interaction, resilience and relationships
  • Global tech companies paying their fair share
  • How do we equip our young people with the skills to benefit from the digital economy, e.g. software apprenticeships
  • You can’t have a discussion about tech ethics without mentioning the possibility of Skynet and the advent of battlefield robots
  • The other major concern was the opportunities for social engineering by governments through mandating digital first. The China Social Credit System was highlighted by several people. The reason I’ve used the Wired link above is that it does highlight that this is also possible in western societies.
Cambridge Analytica used personal information harvested from more than 50 million Facebook profiles without permission.

How does this translate?

My personal priorities are summed up by the word transparency. We need to see how algorithms are written to be able to challenge bias. We also need to see what data is being collected and how it is being used. Society also needs to see how money is being made and what purposes profits and taxes are being put to.

What emerged is that the All-Party Parliamentary Group on Data Analytics is planning to run a commission (an engagement exercise) on tech ethics. The meeting was a round table to frame the themes that this commission would look at, with the aim of recommending changes to legislation and regulation.

The themes that emerged are:

  1. Trust in the way that software is being built and used, particularly from consumers
  2. Avoiding bias, particularly in algorithms, and ensuring diversity
  3. Public understanding and skills (sort of summed up by the phrase moving people from consumers to citizens)
  4. Data and AI opportunities and risks including civil liberties
  5. The boundaries of acceptable use, particularly AI

Changing the standard

My thoughts during the meeting were that there are some immediate steps that could be taken to improve things without reports or additional primary legislation.

  1. Fund and staff the regulators properly – particularly the Information Commissioner’s Office to ensure GDPR is implemented and policed properly
  2. We need knowledge about tech embedded in government. Make the changes to the civil service pay grades that have been talked about for years so that the regulators and the public sector can recruit and retain the right level of tech expertise.
  3. Enforce the existing laws and set up a dedicated online police force to prosecute fraud and hate crime. The model could be similar to the British Transport Police or parts of the Environment Agency, where the industry pays a levy to fund the policing.
  4. We should apply the offline rules to online businesses: Uber has been ruled to be a taxi company, so Facebook is a publishing company, Airbnb is a hotel company and Google is a monopoly (like Microsoft in the 90s). Facebook would take its responsibilities much more seriously if it were fined for publishing hate crime and child pornography. Properly applying age ratings to content on YouTube would make a big difference. Those of us in tech know that it really isn’t as hard as “big tech” makes out, but it is potentially costly.

My top priority for legislation beyond this would be to force some key algorithms to be made open source. I’m thinking of the monopoly ones, or the ones that provide a fundamental utility, e.g. internet search.

Tell us your thoughts

Do you agree that the issues outlined above are the most pressing in tech ethics? And what’s the best way to ensure these ethics are followed? We would be very interested in everyone else’s thoughts on this, and happy to represent these views to the commission if / when it gets going.

The event was one of a series organised by the Parliamentary Internet, Communications and Technology Forum (Pictfor). To read more about the event, please see Pictfor’s website here: http://pictfor.org.uk/blog/