
Last week, I wrote that workers face a triple whammy from modern capitalism. The first is a proliferation of meaningless jobs. The second is that these jobs do not pay well because they are outsourced to contractors. The third is that technological developments threaten to displace workers or eliminate many jobs altogether. On the day the column was published, Citigroup announced that it planned to eliminate up to half of its 20,000 technology and operations jobs because those jobs could be automated. Up to two-fifths of investment banking jobs were deemed “most fertile for machine processing”.
For millions around the world, work provides dignity and gives meaning to their existence. To rob them of it in the name of profits and shareholder-value maximization is to tear apart the fabric of civilized societies. Workers are not pests to be removed by the application of pesticides. That is myopia. However, the impact of technological developments on jobs and wages is not an isolated phenomenon. It is occurring alongside rising market concentration and rising markups.
The International Monetary Fund (IMF) has a forthcoming paper on market concentration and its macroeconomic implications. The paper finds that markups have risen substantially in advanced economies since the 1980s, driven by “superstar” firms that have increased their market power. Silicon Valley giants command enormous market shares in their respective businesses. They have become near-monopolies. Monopolies invest less and innovate less. In fact, they stymie innovation in upstart enterprises lest those upstarts grow into competitive threats. How Google prevents other companies from doing to it what it once did to Microsoft was the subject of a marvellous article in The New York Times in February (“The Case Against Google”). Further, “buying emerging start-ups to reduce competition is another example of the misuse of productive investment to maintain monopoly rents” (“Are Monopolies A Threat To The United States?”, Banque de France, 13 February). That is not “new capital formation”, strictly speaking. Finally, “with higher market power, the share of firms’ revenue going to workers decreases” (“Chart of the Week: The Rise Of Corporate Giants”, IMF Blog, 6 June).
In this regard, it was rather “refreshing” to read Our Final Hour by Sir Martin Rees, who was president of the Royal Society (the UK’s national academy of sciences) from 2005 to 2010. He will celebrate his 76th birthday in less than a week. It was “refreshing” not because the message was positive or optimistic about humans. It was not. In his estimate, the odds are no better than 50% that our species survives to the end of this century. The theme of the book is that technical advances render societies vulnerable to disruption. Throughout the book, his refrain is that science should voluntarily restrain itself from certain pursuits where the benefits are uncertain and the costs, should they materialize, could be tremendous, even if the probability of those costs is small. The same logic applies to technological developments such as robotics and artificial intelligence.
One of the reasons scientists are generally reluctant to halt scientific inquiry is that many beneficial discoveries have been serendipitous (think of the X-ray). However, since technology is, in a way, applied science, this excuse does not extend to technology. One can choose to apply the science in areas where the net social benefits are unambiguously high. Sir Martin writes, “The views of scientists should not have special weight in deciding questions that involve ethics or risks: indeed, such judgements are best left to broader and more dispassionate groups.” Does this injunction not apply with greater force to commercially minded corporations developing and applying technologies?
Many readers of this newspaper might not remember the article, “Why The Future Doesn’t Need Us”, written by Bill Joy, co-founder of Sun Microsystems, in April 2000. The message of his essay is that humanity is yet to come to terms with the fact that “the most compelling 21st-century technologies—robotics, genetic engineering and nanotechnology—pose a different threat than the technologies that have come before. They can self-replicate.” Nearly two decades later, we still have not come to terms with it.
Before 2008, firms on the east coast of America brought the country’s economy to the brink of collapse. They are still at it. Now, they have been joined by firms on the west coast, whose products and business practices have emerged as threats to both the economy and society. With respect to the downsides of technology and finance, there has mostly been a conspiracy of silence on the part of the relevant policymakers.
In April, at the spring meetings of the World Bank and the IMF, there was a panel discussion on “Digitalisation and the New Gilded Age”, moderated by Christine Lagarde, managing director of the IMF. The discussion did not address whether we are in a digitization-led gilded age and, if so, what to do about it, let alone confront some of the broader questions thrown up by digitization that are examined here.
Some of my friends and readers look to me for answers, as though I have them and am holding them back. Let me try. The answers have to start with the admission that there is an insatiable quest for profits, power and dominance. Techno-optimism, or hubris, fuels that quest. As Carl Sagan put it in Pale Blue Dot, the answer lies in placing limits on what may be done and what must not be done.
V. Anantha Nageswaran is an independent consultant based in Singapore. He blogs regularly at Thegoldstandardsite.wordpress.com. Read Anantha’s Mint columns at www.livemint.com/baretalk
Comments are welcome at baretalk@livemint.com