Will artificial intelligence become the new McKinsey?
Originally published in English in The New Yorker.
Amid concerns that artificial intelligence will widen wealth gaps and disempower workers, are there alternatives?
When we talk about artificial intelligence, we reach for metaphors, as we often do when describing anything new or unfamiliar. Metaphors can help us grasp new things quickly, but we should be careful: a bad metaphor can lead us astray. Powerful AI, for example, is commonly compared to the genies of folktales. The comparison is meant to emphasize how hard it is to get powerful entities to obey human commands; the computer scientist Stuart Russell has cited the parable of King Midas (whose wish that everything he touched turn to gold was granted, with disastrous results) to illustrate the danger of an AI doing what you tell it to do instead of what you really want it to do. But the point of the Midas parable is that greed will destroy you, and that the pursuit of wealth costs you everything that truly matters. If your reading of the parable is that, when a god grants you a wish, you should phrase your wish very carefully, then you have missed the point.
So a more appropriate metaphor for the risks of AI might be this: think of AI as a management consulting firm, such as McKinsey. Just as companies purchase McKinsey's services for many reasons, people use AI systems for many reasons. But the parallels between McKinsey, a firm that works with 90 percent of the Fortune 100, and artificial intelligence are clear. Social media companies use machine learning to keep users glued to their feeds; likewise, Purdue Pharma used McKinsey to figure out how to "turbocharge" OxyContin sales during the opioid epidemic. And just as AI promises to give managers a cheap alternative to human labor, McKinsey and similar firms helped normalize mass layoffs as a way to boost stock prices and executive pay, contributing to the hollowing out of the American middle class.
One former McKinsey employee described the firm as "capital's willing executioners": if you want something done but don't want to dirty your own hands, McKinsey will do it for you. Deflecting responsibility is one of the most valuable services a management consulting firm provides. A boss has specific goals but doesn't want to be blamed for doing what is necessary to achieve them; by hiring a consultant, management can claim it is simply following independent expert advice. Even in its current rudimentary form, artificial intelligence has become a way for companies to deflect responsibility, claiming that they are simply doing what "the algorithm" says, even though it was the company that commissioned the algorithm in the first place.
The question we should be asking is: as artificial intelligence becomes more powerful and flexible, is there any way to keep it from becoming another version of McKinsey? The question is worth considering across the different meanings of the word "AI." If you think of AI as a broad set of technologies marketed to companies to help them cut costs, the question becomes: how do we keep those technologies from serving as capital's executioners? Alternatively, if you think of AI as a semi-autonomous software program that solves the problems humans give it, the question is: how do we prevent that software from assisting businesses and capitalists in ways that make people's lives worse? Suppose you have built a semi-autonomous AI that is entirely obedient to humans, one that double-checks to make sure it hasn't misinterpreted its instructions. This is the dream of many AI researchers. Yet such software could still easily cause as much harm as McKinsey has.
Note that it is not enough to declare that the AI you develop will solve only the problems it is asked to solve, and only in ways that benefit society. That is equivalent to saying you can neutralize the threat of McKinsey by founding a "pro-social" consulting firm. In practice, Fortune 100 companies will hire McKinsey rather than your socially beneficial firm, because McKinsey's solutions increase shareholder value more than yours do. Someone can always develop an AI that pursues shareholder value above all else, and most companies would rather use that AI than yours.

Is there a way to prevent artificial intelligence from becoming another grindstone of capitalism? To be clear, when I refer to capitalism, I am not talking about the exchange of money for goods or services; market-determined prices are a feature of many economic systems. By capitalism I mean a specific relationship between capital and labor, in which wealthy people, the capitalists, profit from the efforts of others. So, in the context of this discussion, whenever I criticize capitalism, I am not criticizing the idea of selling things; I am criticizing the idea of wealthy people exerting power over working people. More specifically, I am criticizing the ever-increasing concentration of wealth in the hands of an ever-smaller number of people.
The current AI industry works hard to analyze tasks performed by humans and to find ways of replacing the humans who perform them. Not coincidentally, this is exactly the problem that management, which is to say capital, most wants solved. As a result, AI benefits capital at the expense of labor. Now imagine the opposite: a consulting firm whose mission is to promote the interests of workers. Could artificial intelligence take on that role? Could AI help workers rather than managers?
Some may say that it is not AI's job to fight capitalism. Perhaps, but neither is it AI's job to strengthen capitalism. Yet that is what it currently does. If we cannot find ways for AI to reduce the concentration of wealth, it is hard to call AI a neutral technology, let alone a beneficial one.
Many people believe that artificial intelligence will cause widespread unemployment, and propose a universal basic income (UBI) as a solution. On the whole, I like the idea of a UBI; over time, however, I have come to question the motives of AI practitioners who offer it as a remedy for the mass unemployment AI causes. It would be different if we already had a universal basic income, but we don't, so expressing support for one looks like a way for AI developers to pass the buck to government. In effect, they are intensifying the ills of capitalism in the expectation that, once the problems become severe enough, government will have no choice but to step in. Tech companies' strategies and visions for "making the world a better place" should arouse our suspicion and vigilance.
You may recall that in the lead-up to the 2016 election, the actress Susan Sarandon, an avid Bernie Sanders supporter, said that voting for Donald Trump would be better than voting for Hillary Clinton, because it would hasten the "revolution." I don't know how deeply Sarandon thought this through, but the Slovenian philosopher Slavoj Žižek said the same thing, and I'm sure he thought about it a great deal. He argued that Trump's election would be such a shock to the system that it would bring about change.
What Žižek advocates is an example of an idea in political philosophy known as accelerationism. There are many different versions of accelerationism, but what left-wing accelerationists have in common is the belief that the only way to make things better is to make them worse. Accelerationism holds that attempts to oppose or reform capitalism are futile; instead, we must exacerbate capitalism’s worst tendencies until the entire system collapses. The only way to transcend capitalism is to step on the neoliberal accelerator until the engine explodes.
I suppose this is one way to create a better world, but if it is the approach the AI industry is taking, I want to make sure everyone is clear about what they are working toward. By building AI to do jobs previously done by humans, AI researchers are pushing the concentration of wealth to such extreme levels that the only way to avoid social collapse is government intervention. Intentionally or not, this is essentially the same as voting for Trump in the hope of thereby creating a better world. And Trump's rise illustrates the risk of accelerationism as a strategy: things can get very bad, and stay bad for a long time, before they get better. The truth is, you have no idea how long it will take for things to get better.
I am less convinced by the claim that AI is dangerous because it might develop goals of its own and prevent us from shutting it down. I do, however, think AI is dangerous insofar as it increases the power of capitalism. The doomsday scenario is not a rogue artificial intelligence converting the entire planet into paper clips, as one famous thought experiment imagined; it is AI-driven companies destroying the environment and the working class in pursuit of shareholder value. Capitalism is the machine that will do whatever it takes to keep us from shutting it down, and its most successful weapon is its campaign to prevent us from considering any alternatives.
People who criticize new technologies are sometimes called Luddites, but it helps to be clear about what the Luddites actually wanted. Their main complaint was that their wages were falling while factory owners' profits, and food prices, were rising. They also protested unsafe working conditions, the use of child labor, and the sale of shoddy goods that discredited the entire textile trade. The Luddites did not destroy machines indiscriminately; they spared the machines of owners who paid their workers fairly. The Luddites were not against technology; what they wanted was economic justice. They broke machinery to get the attention of factory owners. The fact that "Luddite" is now used as an insult, a way of calling someone irrational and ignorant, is a distortion of what the movement stood for.
Whenever someone accuses someone else of being a Luddite, it's worth asking, are the people being accused actually opposed to technology? Or do they favor economic justice? Do the people making the accusations truly support improving people's lives? Or do they simply want to increase private capital accumulation?
Today we find ourselves in a situation where technology has become conflated with capitalism, and capitalism has become conflated with the very concept of progress. If you try to criticize capitalism, you are accused of being both anti-technology and anti-progress. But what does progress mean if it doesn't include a better life for working people? What is the point of greater efficiency if the money saved goes nowhere but into shareholders' bank accounts? We should all strive to be Luddites, because we should all care more about economic justice than about the accumulation of private capital. We need to be able to criticize harmful uses of technology, including those that benefit shareholders rather than workers, without being painted as opponents of technology.
Imagine an idealized future where, a hundred years from now, no one is forced to do any job they don't like, and everyone can spend their time doing what they find most fulfilling. It is obviously hard to see how we get from here to there. But now consider two possible scenarios for the next few decades: in one, management and capital are even more powerful than they are now; in the other, labor is more powerful than it is now. Which seems more likely to bring us closer to that idealized future? And which way is AI, as currently deployed, pushing us?
Of course, some people believe that in the long run, new technology improves our standard of living, making up for the unemployment it causes in the short term. This argument carried weight for much of the period following the Industrial Revolution, but it has lost its force over the past half century. In the United States, GDP per capita has nearly doubled since 1980, while median household income has lagged far behind. That period encompassed the information-technology revolution, which means that the economic value created by personal computers and the Internet served mainly to increase the wealth of the top one percent rather than to raise the living standards of Americans as a whole.
Of course, we all have the Internet now, and the Internet is great. But real estate prices, college tuition, and health care costs have all risen faster than inflation. In the 1980s, it was common to support a family on one income; now it is rare. So how much progress have we really made in the past four decades? Sure, online shopping is fast and convenient, and streaming movies at home is cool, but I think many people would gladly trade those conveniences for the ability to own their own home, send their kids to college without lifelong debt, and not be bankrupted by a hospital stay. It is not technology's fault that median income hasn't kept pace with per-capita GDP; that is mostly the fault of Ronald Reagan and Milton Friedman. But some responsibility also falls on the management practices of CEOs like Jack Welch, who ran General Electric from 1981 to 2001, and of consulting firms like McKinsey. I am not blaming personal computers for rising wealth inequality; I am saying that the idea that better technology will necessarily improve people's standard of living is no longer credible.
The fact that personal computers have not increased median income is particularly important when considering the possible benefits of artificial intelligence. It is often suggested that researchers should focus on how AI can increase the productivity of individual workers rather than replace them; this is known as the augmentation path rather than the automation path. That's a worthy goal, but by itself it won't improve people's economic fortunes. Productivity software running on personal computers is a perfect example of augmentation rather than automation: word processing programs replaced typewriters rather than typists, and spreadsheet programs replaced paper spreadsheets rather than accountants. However, the increase in personal productivity brought about by personal computers has not been accompanied by an increase in living standards.
The only way technology can improve living standards is if economic policies are in place to distribute its benefits appropriately. We haven't had such policies for the past four decades, and unless that changes, there is no reason to think the coming advances in AI will raise median incomes, even if we figure out how to make it increase the productivity of individual workers. AI will certainly reduce corporate labor costs and increase corporate profits, but that is a different thing entirely from improving our standard of living.
It sounds appealing to assume that a utopian future is just around the corner and to build technology for that future. But the fact that a technology would be useful in a utopia doesn't mean it is useful now. In a utopia equipped with machines that convert toxic waste into food, generating toxic waste would not be a problem; here and now, no one can claim that generating toxic waste is harmless. Accelerationists might argue that generating more toxic waste will spur the invention of waste-to-food converters, but how convincing is that? We evaluate the environmental impact of technologies in the context of currently available mitigations, not hypothetical future ones. By the same token, we cannot evaluate AI by imagining how helpful it would be in a world with UBI; we must evaluate it against the existing imbalance between capital and labor, and in that context AI is a threat because of the help it will give to capital.
One former McKinsey partner defended the firm's actions by saying, "We don't make policies. We enforce them." But that is a weak excuse: harmful policy decisions become more likely when a consulting firm, or a new technology, offers the means to implement them. The versions of artificial intelligence currently being developed make it easier for companies to lay off workers. Is there any way to develop AI that makes layoffs less likely instead?
In his book How to Be an Anticapitalist in the 21st Century, sociologist Erik Olin Wright provides a taxonomy of strategies for dealing with the dangers of capitalism. The two strategies he mentions are crushing capitalism and dismantling capitalism, which are probably outside the scope of this discussion. What is more relevant here is taming capitalism and resisting it. Roughly speaking, taming capitalism is government regulation, and resisting capitalism is grassroots activism and unions. Is there a way for AI to enhance these things? Is there a way for AI to empower unions or worker-owned cooperatives?
In 1976, employees at Lucas Aerospace in Birmingham, England, faced redundancy due to defense spending cuts. In response, the shop stewards drew up a document known as the Lucas Plan, which described 150 "socially useful products," from dialysis machines to wind turbines to hybrid car engines, that the workforce could build with its existing skills and equipment rather than being laid off. Lucas Aerospace's management rejected the proposal, but it remains a famous modern example of workers trying to steer capitalism in a more humane direction. Modern computing technology certainly makes something similar possible.
Does capitalism have to be as harmful as it is now? Maybe not. The thirty years after World War II are sometimes called the golden age of capitalism. That era was partly the result of better government policies, but government did not create the golden age on its own: corporate culture was different then. In its 1953 annual report, GE boasted of how much it paid in taxes and spent on payroll, and made clear that "maximizing employment security is the company's primary goal." The founder of Johnson & Johnson wrote that the company owed a higher responsibility to its employees than to its shareholders. Companies then had a very different conception of their role in society than companies do today.
Is there a way to get back to these values? This may seem unlikely, but remember, the golden age of capitalism came on the heels of the great wealth inequality of the Gilded Age. Now that we live in the Second Gilded Age, wealth inequality is about the same as it was in 1913, so it’s not impossible that we’ll move from where we are now into a Second Golden Age. Of course, between the first Gilded Age and the Golden Age, we had the Great Depression and two world wars. Accelerationists might say that these events were necessary for the arrival of the Golden Age, but I think most of us would prefer to skip these steps. The task before us is to imagine how technology can propel us into a golden age without first bringing about another Great Depression.
We all live in a capitalist system, so, whether we like it or not, we are all participants in capitalism. And there is reason to doubt whether you, as an individual, can do anything about it. If you were a food scientist at Frito-Lay whose job was to invent new flavors of potato chips, I would not say you were complicit in the engine of consumerism and morally obliged to quit. You would be using your training as a food scientist to give customers a delightful experience; that is a perfectly reasonable way to make a living.
But many people working on AI see it as more important than inventing new flavors of potato chips. They say it is a world-changing technology. If that is the case, then they have a responsibility to find ways for AI to make the world better, not worse. Can AI reduce the world's inequities without first pushing us to the brink of social collapse? If AI is as powerful a tool as its proponents claim, they should be able to find uses for it other than intensifying the ruthlessness of capital.
If there is one lesson to draw from the wish-granting stories mentioned at the beginning of this article, it is that the desire to get something without effort is itself the real problem. Consider "The Sorcerer's Apprentice," in which the apprentice casts a spell to make broomsticks carry water but cannot make them stop. The lesson of that story is not that magic is uncontrollable: at the end, the sorcerer returns and immediately cleans up the apprentice's mess. The lesson is that you cannot escape hard work. The apprentice got into trouble because he tried to dodge his chores by taking a shortcut.
People's tendency to regard AI as a magical problem-solver points to a widespread desire to avoid the hard work of building a better world. That hard work will involve tackling wealth inequality and taming capitalism. For technologists, these are the hardest tasks and the ones they most want to avoid; they call into question the assumption that more technology is always better, and they shake the comforting belief that technologists can simply immerse themselves in the AI revolution and everything will work itself out. No one wants to believe they are complicit in the world's injustices, but exactly this kind of critical self-reflection is required of those developing world-shaking technologies. Their willingness to examine their own role in the system will determine whether AI brings about a better world or a worse one.