Algorithms and decision support systems increasingly influence our choices.
They do this in a number of ways, primarily by using our past behavior and the revealed preferences of people similar to us to help filter our options. Take review platforms such as Yelp or Foursquare, for example. It’s nothing new that we follow the advice of our friends and visit the restaurant that most of them recommend. But the internet has made it orders of magnitude easier to share and aggregate data such as reviews or recommendations. So even though we theoretically have many options, we typically end up choosing the restaurant at the top of the list (and, a recent study found, the one with the largest number of reviews).
Soon the world will be dominated by algorithms that predict fairly accurately what we want, prefer, and will do next. The more data these algorithms gather, the better their predictions will become. It’s not science fiction to suggest that in the near future, algorithms will know which of our mental (and emotional) buttons to press in order to make us believe, want, or do things. In itself, this is not necessarily bad – after all, we use these algorithms voluntarily because they offer us some kind of benefit. They take some of the load off our shoulders by sifting through options and presenting us with the most suitable ones – or by presenting the pieces of information we are most likely to latch on to.
However, it’s not hard to see that incentives may not be fully aligned: from the perspective of their owners, algorithms are tools to accomplish a goal – such as selling products, keeping our eyes glued to a website, or making us like or dislike a political candidate. In pursuing these goals, algorithms serve us suggestions (such as nuggets of ‘news’ designed to stir outrage and polarization) that may not align with the goals we set for ourselves (being open-minded and weighing evidence on both sides to form a balanced opinion). Our dependency on algorithms can leave us open to manipulation.
There’s a need for self-regulation, ethical codes, and government intervention to shield against fraud, manipulation, and addiction.
But there’s also a need for algorithmic literacy – to enable children, teenagers, and adults to understand, reflect on, and wisely interact with algorithms:
- Understand what algorithms are. Most algorithms we encounter in daily life are ‘prediction machines’: models that use existing data to predict missing data. For example, Spotify (or Pandora, or Apple Music) will take the songs you have listened to in the past (existing data) to predict what you will enjoy going forward (missing data). Importantly, algorithms only establish correlations between items – not necessarily causation.
- Know where algorithms are deployed – today and in the future. In many cases, algorithms are both more accurate and cheaper than humans at prediction tasks. This is especially true in environments that offer a lot of structured data, fairly consistent patterns, and only a limited number of possible outcomes. For example, it’s comparatively easy for Google Maps to predict how crowded a restaurant will be at a given time, but quite difficult to forecast rare events such as earthquakes.
- Understand the intent and goals of those owning or deploying the algorithm. Algorithms are built to solve a particular problem, and their economics help explain the motivations of those who develop or deploy them. For example, a social network’s main algorithm might not show you the posts that are most life-enriching: the network’s priority is keeping you on the site, so as to maximize the time you spend there and the likelihood that you interact with its ads.
- Take control of your data and privacy. Algorithms feed on data: training data is used to create a model in the first place, input data to generate predictions, and feedback data to refine the model. Some of your data might already have been used as training data, and it is used continuously as input data as you browse the web. Privacy regulations such as the GDPR help users gain more control over what data is used for which purpose, but it’s better to proactively think about what data you share about yourself (every single ‘like’ of a statement, post, or photo reveals preferences and affinity with people, products, ideas, and political views).
- Avoid dependency. By relying too heavily on the decision support algorithms provide, we risk becoming dependent on them and on the organizations that deploy them. GPS navigation works well as long as the US government grants access to the satellites and you have sufficient battery charge on your mobile device, access to map data, and a clear line of sight to the sky. Miss any of these ingredients, and you are back to reading a physical map.
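The ‘prediction machine’ idea above – existing data used to fill in missing data – can be made concrete with a toy recommender. This is a minimal sketch, not any platform’s actual algorithm; the users, songs, and ratings are made up, and real systems use far richer models.

```python
# Toy 'prediction machine': predict a missing rating (data we lack)
# from existing ratings (data we have). All names and numbers are
# invented for illustration.
ratings = {
    "alice": {"song_a": 5, "song_b": 3, "song_c": 4},
    "bob":   {"song_a": 4, "song_b": 2, "song_c": 5},
    "carol": {"song_a": 1, "song_b": 5, "song_c": 2},
}

def similarity(u, v):
    """Cosine similarity between two users over songs both rated."""
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    dot = sum(ratings[u][s] * ratings[v][s] for s in shared)
    norm_u = sum(ratings[u][s] ** 2 for s in shared) ** 0.5
    norm_v = sum(ratings[v][s] ** 2 for s in shared) ** 0.5
    return dot / (norm_u * norm_v)

def predict(user, song):
    """Predict a missing rating as a similarity-weighted average
    of other users' ratings for that song."""
    peers = [v for v in ratings if v != user and song in ratings[v]]
    weights = [similarity(user, v) for v in peers]
    if not any(weights):
        return None
    total = sum(w * ratings[v][song] for w, v in zip(weights, peers))
    return total / sum(weights)

# Alice hasn't heard song_d; predict her rating from others' data.
ratings["bob"]["song_d"] = 5
ratings["carol"]["song_d"] = 1
print(round(predict("alice", "song_d"), 2))
```

Note that the prediction rests purely on correlation between listening histories: the model has no idea *why* Alice’s tastes track Bob’s, which is exactly the correlation-versus-causation caveat above.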
If the current trend continues, algorithms will permeate ever more of our work and personal lives. Developing a fundamental understanding of their nature, application areas, and mechanics will help us use algorithms to our advantage while preserving our autonomy and independence.
About the authors
Simon Mueller is a core member of the Strategy and Operations practice areas at Boston Consulting Group. He was formerly the general manager of the BCG Henderson Institute (BHI), which aims to surface, develop, and apply the next generation of corporate strategy ideas.
Simon advises technology clients on challenges such as business development, marketing, sales, product development, lean manufacturing, procurement strategy, cost reduction, operational efficiency, and corporate development.
At Harvard, where Simon received a Master in Public Administration, he focused on data analytics, technology policy, and performance management. He is the cofounder of the Harvard Future Society, which advises policy makers on the implications of converging technologies, such as artificial intelligence safety and privacy.
Julia Dhar joined The Boston Consulting Group in 2009. She is the cofounder and leader of BeSmart, BCG’s behavioral economics and insights initiative. In this role, she brings her passion and experience designing complex system transformation through nuanced behavioral change to clients in the public and private sectors.
Julia has advised and implemented transformational strategy initiatives across a range of social impact and public sector organizations, including economic development and planning, finance, labor, education, and social welfare. She works with private sector clients to integrate choice architecture and customer insights to improve the productivity, performance, and customer experience of organizations in sectors including airlines and travel and tourism, energy, IT, and telecommunications.
Before joining the firm, Julia worked as private secretary to the deputy prime minister and minister of finance in New Zealand. She also led a major study to increase private capital for public services as a member of the UK cabinet office’s social investment and finance team.
The Decision Maker’s Playbook by Simon Mueller and Julia Dhar is out now from FT Publishing, priced $21.99 (US) or £16.99 (UK). For more information go to: www.decisionmakersplaybook.com