Limitations of Algorithms

Algorithms are coming to coordinate an ever larger swath of human activity, from the media we are recommended to the routing of city traffic.

For better or worse, our world is in the midst of a silent algorithmic revolution. Many of the decisions that humans once made are being handed over to mathematical formulas. Today, we expect algorithms to provide us with the answer: who to date, where to live, how to deal with an economic problem. With the correct algorithms, the idea goes, computers can drive cars better than human drivers, trade stocks better than Wall Street traders, and deliver the news we want to read better than newspaper publishers. As Karen Yeung of King’s College London proposes [1], “Just in the way that social theorists have identified markets on the one hand and bureaucratic hierarchy on the other as two quite different ways to coordinate social activity, I’m going to suggest that algorithmic regulation can be understood as a third form of social ordering… it’s different because its underlying logic is driven by the algorithm, which is mathematical.” More and more authority is shifting to these automated systems, and it is important for us to ask what the consequences of that shift are.

Formal Systems

The fundamental issue is that of the interaction between the informal world of people and the formal world of computers, and the difficulty of translating between these two systems of organization. With cloud computing and advanced analytics, we are extending computation out into the real world like never before. Through these platforms, and the algorithms that operate on them, we are trying to bring in and coordinate ever more spheres of our social, economic and technological systems of organization. In doing so, we are taking an informal world that has evolved over a prolonged period and bringing it into the world of formal systems.

The mathematical and scientific framework out of which we build our algorithms is unfortunately not a universal language. It is a partial language, heavily dependent upon a reductionist paradigm, and this creates many limitations in its capacities; the resulting models are in no way a neutral interpretation of reality. Just as data can deceive, models can deceive. All paradigms and theories are only ever partial accounts of reality, and the models that derive from them are never neutral: they reflect the particular paradigm upon which they are based. No matter how rock solid the maths appears, all of our mathematical frameworks are incomplete. Every model rests on opinions and perspectives about the way the world is; some of those are better than others, but none are complete, however elegant and internally consistent the logic of the model may be.

The mathematics that we know is beautiful, pure and logically consistent, until you look out the window and realize that the world is not full of triangles, squares and smooth curves. That is the problem: as we bring this technology out into the world, we will keep hitting that gap. Even if the process is automated, the algorithms used to process the data are imbued with particular values and contextualized within a particular scientific approach. This is true not just at the fundamental level of the underlying science but also at a more practical level.

Algorithms cannot do anything on their own; they reflect the social and institutional truths of the world in which we live. If, for example, society is racist, then that will be in the data, and the algorithm will pick up on it, just as it will pick up on and amplify any other bias. Take an employer trying to figure out whom to hire: given that men have historically been more successful in certain career fields than women, because of all kinds of institutional bias, the algorithm is likely to simply reflect that history. It will tell you to hire these men because they have been more successful in the past. There are many ways in which algorithms can reflect both our incomplete knowledge and the biases we live with every day, while hiding them behind a guise of neutrality and objectivity. And because of the scale, scope and power of the technology, they can have mass effects, something we will inevitably learn more about as we build out the IT infrastructure of our global economy.
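To make the mechanism concrete, here is a minimal sketch in Python, using entirely synthetic data: a classifier trained on historically biased hiring outcomes ends up reproducing the bias, scoring two equally skilled candidates differently. The feature names and numbers are invented purely for illustration.

```python
# Toy illustration (not any real hiring system): a classifier trained on
# historically biased outcomes learns to reproduce the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)   # 0 = female, 1 = male (synthetic)
skill = rng.normal(0, 1, n)      # skill is distributed equally across groups

# Historical "hired" label: skill matters, but past bias favoured group 1.
hired = (skill + 1.0 * gender + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Two candidates with identical skill but different gender:
candidates = np.array([[0, 0.5], [1, 0.5]])
print(model.predict_proba(candidates)[:, 1])
# The model assigns a higher hiring probability to the historically
# favoured group despite identical skill: the bias in the data persists.
```

Nothing in the model is "racist" or "sexist" in itself; it has simply learned, faithfully and neutrally by its own standards, the pattern that the biased data contains.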

Weapons of Math Destruction

Weapons of math destruction are algorithms that have a large-scale social impact and produce negative externalities.

The negative externalities of these algorithms are outlined in a recent book by Cathy O’Neil called Weapons of Math Destruction [2]. She defines weapons of math destruction as mathematical models or algorithms that attempt to quantify socially important qualities: creditworthiness, teacher quality, insurance risk, college rankings, employment application screening, policing and sentencing, workplace wellness programs and so on, but that have harmful outcomes and often reinforce inequality, keeping the poor poor and the rich rich. In the book, she recounts the stories of people who have been marked down in some way by an algorithm: the competent teacher fired due to a low score on a teacher assessment tool, the people whose credit card spending limits were lowered because they made purchases at certain shops, the college student who could not get a job at a grocery store because of his answers on a personality test. The algorithms that judge and rate them are completely opaque and cannot be questioned; people often have no capacity to contest the result when the algorithm makes a mistake. O’Neil lists three common characteristics of these weapons of math destruction: they are proprietary or otherwise shielded from prying eyes, so that they are in effect black boxes; they affect large numbers of people, increasing the chances that they get it wrong for some of them; and they have a damaging effect on those they misjudge.

Black Box

Most platforms are privately owned enterprises and do not wish to expose the internal workings of their algorithms to the view of the end user. Added to this, the complexity of these systems often overwhelms people’s capacity to comprehend them. In this respect, subprime mortgages were a perfect example of a WMD: most of the people buying, selling and even rating them had no idea how risky they were. This, of course, extends to the whole of the financial market, where one can only speculate about what algorithms might be out there. Machine learning algorithms operate in high-dimensional spaces, processing possibly millions of parameters, which is hard for humans to comprehend. Communicating such things to people will require new uses of visualization, so that they can quickly and intuitively understand how the system works.
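Even without access to a model’s internals, there are ways to probe it from the outside. Below is a minimal sketch, assuming scikit-learn and entirely synthetic data, of one such technique, permutation importance: shuffle each input feature in turn and measure how much the model’s accuracy drops, which gives a rough, externally observable picture of what the black box relies on.

```python
# A minimal sketch of probing an opaque model from the outside:
# permutation importance measures how much predictive accuracy drops when
# each input feature is shuffled, without inspecting the model's internals.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(black_box, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Techniques like this are only a partial remedy, of course: they tell you which inputs matter, not whether the model’s use of them is fair or sound.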

The only sustainable way to develop these systems is by keeping people informed and engaged. If we want to develop these technologies sustainably, we need a system of design that includes transparency and accountability. That means bridging the language of the machine and that of the human by creating visualizations and other methods that can quickly and intuitively communicate what the underlying technology is doing. Technological crises inevitably occur when a technology becomes too complicated, too tightly coupled and too obfuscated, and something goes wrong in the system as a whole.

Contextualization & Optimization

The other issue is that of decontextualization, which results from the narrow form of intelligence that analytics represents. The problem with analytics is that it decontextualizes: by focusing on things, it isolates them from their context and leaves them open to misinterpretation. By looking at something from a single perspective, we gain greater detail from that perspective, but we can lose the relevant connections that give it its full meaning and implications. Advanced data analytics enables us to see farther, to focus more clearly, to pick out the needle in the haystack. However, the more powerful we make the telescope, the more focused we become and the more decontextualized the information becomes. This is an issue because it becomes ever easier to optimize for a single parameter while creating ever more negative externalities on the metrics that are not captured, due to the narrowing of our vision.
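The core of the problem can be shown in a few lines. The following toy sketch, with numbers invented purely for the example, gives an optimizer a single visible metric (profit) while a second metric it cannot see quietly degrades.

```python
# Toy example of narrow optimization: the action that maximizes the one
# metric we measure (profit) is chosen, while an unmeasured metric
# (community impact) silently degrades. All numbers are invented.
actions = {
    #       (profit, community impact -- hidden from the optimizer)
    "A": (100, +5),
    "B": (180, -2),
    "C": (260, -40),
}

# The optimizer only sees profit, so it happily picks the worst option
# on the hidden dimension.
best = max(actions, key=lambda a: actions[a][0])
profit, impact = actions[best]
print(f"chosen: {best}, profit: {profit}, hidden impact: {impact}")
# chosen: C, profit: 260, hidden impact: -40
```

The failure is not in the optimization, which works exactly as specified; it is in the specification, which captures only a slice of what matters.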

Finance is a very good illustration of this. Because of the quantitative and complex nature of financial markets, finance has probably been the most advanced user of algorithms, and it gives a good picture of where we are heading with the technology. Finance is, obviously, focused on optimizing for monetary outcomes. Real-world economic goods like food and energy are brought into the financial system and made available for trading; algorithms then operate on them with the sole aim of optimizing for profit. But the consequences of that can be food riots in Egypt when the price of grain goes too high, or elderly people in Canada who cannot afford their heating gas during winter because of speculation [3]. As we connect everything up through cloud platforms, we increasingly operate within very complex systems, and narrow algorithmic optimization in one place can lead to unintended consequences in another.

Predictive Limitations

Algorithms base their future projections on past data and are thus, in essence, always conservative, unable to tell us much about a future that might look radically different from the past.

It is also important to note that algorithms are analytical tools, built out of the analytical capacities of digital computers. Analytics always acts on data: every algorithm takes in data and performs some operation on it. But data is always from the past; there is no such thing as data from the future. The implication is that these models can only tell us about a future that resembles the past. An algorithm can, of course, perform operations on past data to project a future that looks different from the past, but these models are inherently not designed to tell us about a future that is qualitatively different. We can put all sorts of nonlinear and stochastic elements into the models to make them look more like what happens in the real world, but at the end of the day their essential reference point is past data, which makes them inherently conservative. This is fine if the system you are dealing with is in a normal state of development, but that is not always the case. Sometimes major changes happen, and the model is unlikely to tell us much about them, as exemplified by the extraordinarily poor predictive capacity of economic models in relation to financial crises.
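A minimal sketch, using synthetic data with an artificial regime shift, makes the point directly: a model fitted to pre-break data extrapolates the old regime and badly mispredicts everything after the break.

```python
# A minimal sketch of the conservatism of data-driven prediction:
# a model fitted to pre-break data extrapolates the old regime and
# badly mispredicts after a structural break. Synthetic data throughout.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
t = np.arange(200).reshape(-1, 1)
y = 0.5 * t.ravel() + rng.normal(0, 2, 200)
y[150:] -= 60                                      # regime shift at t = 150

model = LinearRegression().fit(t[:150], y[:150])   # trained on the past
pred = model.predict(t[150:])                      # asked about the future

print("mean absolute error after the break:",
      np.abs(pred - y[150:]).mean().round(1))      # roughly the size of the shift
```

No amount of extra past data would have helped here; the break is simply not in the record the model was given.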

In order to get a future that looks qualitatively different from the past, you need a theory; data and analytics are not going to help you with that. Data without theory can lock you into the past, since analytics will always tend towards reinforcing past patterns. Data analytics systems don’t know what might be, what could be, or what we might want to be, and as such they often create self-fulfilling path dependencies. In an article in Harvard Business Review entitled Learning to Live with Complexity [4], the authors note, “in complex systems, events far from the median may be more common than we think. Tools that assume outliers to be rare can obscure the wide variations contained in complex systems. In the U.S. stock market, the 10 biggest one-day moves accounted for half the market returns over the past 50 years. Only a handful of analysts entertained the possibility of so many significant spikes when they constructed their predictive models.”
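The statistical intuition behind that quote can be reproduced with a quick simulation. The sketch below, using invented parameters, compares a Gaussian sample with a heavy-tailed one and shows how much of the total the few largest observations account for in each case.

```python
# A quick illustration of why tools that assume outliers are rare can
# mislead: in a heavy-tailed distribution the few largest observations
# dominate the total in a way a Gaussian model never predicts.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

gaussian = np.abs(rng.normal(size=n))
heavy = np.abs(rng.standard_t(df=1.5, size=n))  # Student-t, heavy tails

for name, x in [("gaussian", gaussian), ("heavy-tailed", heavy)]:
    top10_share = np.sort(x)[-10:].sum() / x.sum()
    print(f"{name}: top 10 of {n} observations = {top10_share:.1%} of total")
# Under the heavy-tailed model a handful of extreme events accounts for a
# large fraction of the total, echoing the stock-market example above.
```

A model built on the Gaussian assumption will treat those dominant events as negligible rounding errors, which is exactly the blind spot the HBR authors describe.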

This is one of the key problems with an over-reliance on analytical reasoning: it tells us that the future will be similar to the past, and because it often is, it lulls us into a false sense of security. Even though major changes happen rarely, they can be so large, and the incremental changes typically so small, that the unpredictable paradigm shifts end up being more significant than all the linear, incremental changes the system predicted so well. To create real change, a change in paradigm, we need something qualitatively different. Visions, imagination and theories can inform us of futures that have never existed; algorithms are not really designed to deliver that. We could try to get computers to think “outside the box,” but this is not what analytical reasoning is designed for. It is like trying to drive a screw into a piece of wood with a hammer: you will get better results in the long run if you invest in using the correct tool for the correct application.

1. Karen Yeung – Algorithmic Regulation – The Frontiers of Machine Learning. (2018). YouTube. Retrieved 12 February 2018, from https://www.youtube.com/watch?v=joep4KN-Kv0&t=17s

2. O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Amazon.com. Retrieved 12 February 2018, from https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815

3. How algorithms shape our world – Kevin Slavin. (2018). YouTube. Retrieved 12 February 2018, from https://www.youtube.com/watch?v=ENWVRcMGDoU

4. Learning to Live with Complexity. (2011). Harvard Business Review. Retrieved 12 February 2018, from https://hbr.org/2011/09/learning-to-live-with-complexity