Big Data and Algorithms Need Our Moral Compass. Here’s Why.

Artificial intelligence, big data, and machine learning are increasing the need for our moral compass. An article on LeadershipWatch.

Big data and complex algorithms are all around us. Machines are getting smarter by the day. They can help us make better decisions and make our lives easier. But is machine intelligence always right?

Big data and computer algorithms are all around us

The big data era, where large amounts of data are used to analyze, understand, and predict developments in real time, has clearly begun. Business leaders increasingly turn to computation to get faster and more accurate answers to questions like:

‘Which news item or which movie should we recommend to people?’;

‘What product is this person most likely to buy?’; or even

‘Who should the company hire?’

Big data and algorithms have become the new gold of the information age.

Machines are getting smarter by the day

“Recently,” explains technosociologist Zeynep Tufekci (@zeynep) in a brilliant TED Talk (you can watch her talk below), “complex algorithms have made great strides. They can recognize human faces. They can decipher handwriting. They can detect credit card fraud and translate between languages. They can detect tumors in medical imaging. They can beat humans in chess and Go.

Much of this progress comes from a method called “machine learning.” Machine learning is different from traditional programming, where you give the computer detailed, exact, painstaking instructions. It’s more like you take the system and you feed it lots of data, including unstructured data, like the kind we generate in our digital lives. And the system learns by churning through this data.”
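To make that contrast concrete, here is a minimal sketch we put together (our own illustration, not from the talk), using Python, scikit-learn, and made-up transaction data. The rule function and the numbers are invented for the example:

```python
# Contrasting the two approaches Tufekci describes, on a toy
# credit-card-fraud task with invented data.

from sklearn.ensemble import RandomForestClassifier

# Traditional programming: we spell out the rule ourselves.
def flag_fraud_by_rule(amount, hour):
    # Hand-written, exact instruction: large purchases at night are suspicious.
    return amount > 5000 and (hour < 6 or hour > 22)

# Machine learning: we give the system labeled examples instead of rules.
# Each row is [amount, hour]; labels mark past transactions as fraud (1) or not (0).
X_train = [[6200, 3], [40, 14], [7100, 23], [15, 10], [5800, 2], [60, 16]]
y_train = [1, 0, 1, 0, 1, 0]

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)  # the system "churns through" the data

# The trained model now makes its own call on unseen transactions;
# nowhere did we write down the rule it ends up using.
print(model.predict([[6500, 1], [25, 12]]))
```

The point of the sketch is the last line: with the hand-written rule you can read off exactly why a transaction was flagged; with the learned model, the decision logic lives implicitly in the fitted parameters.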

But is machine intelligence always right?

“Consider a hiring algorithm,” Zeynep Tufekci goes on to explain, “a system used to hire people, using machine-learning systems. Such a system would have been trained on previous employees’ data and instructed to find and hire people like the existing high performers in the company. Sounds good. I once attended a conference that brought together human resources managers and executives, high-level people, using such systems in hiring. They were super excited. They thought that this would make hiring more objective, less biased, and give women and minorities a better shot against biased human managers (…)

Now, I have a friend who developed computational systems to predict the likelihood of clinical or postpartum depression from social media data. The results are impressive. Her system can predict the likelihood of depression months before the onset of any symptoms – months before. No symptoms, there’s prediction. She hopes it will be used for early intervention. Great! But now put this in the context of hiring.

So at this human resources conference, I approached a high-level manager in a very large company, and I said to her: “Look, what if, unbeknownst to you, your system is weeding out people with high future likelihood of depression? They’re not depressed now, just maybe in the future. What if it’s weeding out women more likely to be pregnant in the next year or two but aren’t pregnant now? What if it’s hiring aggressive people because that’s your workplace culture?” You can’t tell this by looking at gender breakdowns. Those may be balanced. And since this is machine learning, not traditional coding, there is no variable there labeled “higher risk of depression,” “higher risk of pregnancy,” “aggressive guy scale.” Not only do you not know what your system is selecting on, you don’t even know where to begin to look. It’s a black box. It has predictive power, but you don’t understand it. “What safeguards,” I asked, “do you have to make sure that your black box isn’t doing something shady?”

She stared at me and said, “I don’t want to hear another word about this.” And she turned around and walked away. Mind you, she wasn’t rude. It was clearly: what I don’t know is not my problem, go away, death stare.

Is this the kind of society we want to build, without even knowing we’ve done this, because we turned decision-making over to machines we don’t totally understand?”
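Her black-box worry is easy to reproduce. Below is an illustrative sketch of our own (synthetic data; the feature names and numbers are invented for the example) of how a hiring model can end up selecting on a hidden attribute through an innocent-looking proxy feature, even though no input is labeled anything like “higher risk of depression”:

```python
# Synthetic illustration of the black-box problem: no feature names
# the hidden attribute, yet the model still selects on it via a proxy.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hidden attribute the employer never sees and never intends to use.
hidden = rng.integers(0, 2, n)

# Innocent-looking features; the second is quietly correlated
# with the hidden attribute (a proxy).
test_score = rng.normal(70, 10, n)
gap_months = rng.poisson(2 + 6 * hidden)  # proxy: career gaps

X = np.column_stack([test_score, gap_months])

# Suppose past "high performer" labels were themselves skewed
# against the hidden group.
hired = ((test_score > 65) & (hidden == 0)).astype(int)

model = LogisticRegression().fit(X, hired)

# The feature names reveal nothing suspicious...
print(dict(zip(["test_score", "gap_months"], model.coef_[0].round(2))))

# ...but the model's hire rate differs sharply by the hidden attribute.
for g in (0, 1):
    print(f"hidden={g}: hire rate {model.predict(X[hidden == g]).mean():.2f}")
```

As in Tufekci’s anecdote, a gender or group breakdown of the inputs can look balanced, and there is no variable to inspect, yet the learned weights quietly reproduce the bias baked into the historical labels.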

Big data and machine learning technologies will soon make their presence felt in the financial services, insurance, and healthcare industries (to name just a few).

The effects on all of us will be profound. This piece by Bernard Marr (@BernardMarr) will help you quickly grasp what lies ahead: 3 Industries that will be Transformed by AI, Machine-Learning, and Big Data in the Next Decade

Yes, computation and big data can help us make faster and better decisions.

But can we really afford to step away from the difficult questions and dilemmas that will inevitably arise?

“We cannot outsource our responsibilities to machines. We must hold on ever tighter to human values and human ethics.” – Zeynep Tufekci

Or should we rather do the opposite and step in?

By educating ourselves on how these technologies work, and what they can and cannot offer.

By demanding and giving meaningful transparency (read more here: Leading Change – We Need More Transparency).

By cultivating and using our moral compass.

Aad and I agree with Zeynep Tufekci: “Machine intelligence makes human morals more important.” How about you?


Zeynep Tufekci (@zeynep) is an expert on the social impacts of technology. She is an assistant professor in the School of Information and Library Science at the University of North Carolina, a faculty associate at the Berkman Center for Internet and Society at Harvard, and a former fellow at the Center for Information Technology Policy at Princeton. Her research revolves around politics, civics, movements, privacy and surveillance, as well as data and algorithms.


Photo: Tao Tsai/Flickr (Creative Commons)


This article is part of our ‘Skills for the future’ Expert Series in which we share valuable insights, pointers and lessons from business leaders, experts and role models selected by Hanneke Siebelink. Find Expert Series articles here.


Hanneke Siebelink is Research Partner and Writer at HRS Business Transformation Services, and author of several books. Find out more about Hanneke and HRS services. If you would like to invite us to your organization, contact us here.

One Comment on “Big Data and Algorithms Need Our Moral Compass. Here’s Why.”

  1. Thanks for this interesting insight. The moral questions linked to machine learning come with economic, cultural and social questions that also seem to lack a clear answer. The reaction of the manager who turns away when a specialist asks relevant but puzzling questions perfectly symbolizes our own limited ability to figure out intelligent answers. If workers are taught to optimize their work through automated machine learning, told on the one side that it will increase their results and efficiency, and pressured on the other side to reach higher objectives, then they will indeed follow that lady and say they do not want more moral discussions.

