Challenges and responses to working with artificial intelligence
On-the-job learning faces a challenge today. Sophisticated analytics, artificial intelligence, and robots are intruding into every corner of the workplace, fundamentally upending this time-honored and effective way of learning. As technology automates more jobs, tens of thousands of people leave or change jobs every year, and hundreds of millions must learn new skills and new ways of working. Yet there is broad evidence that companies deploying smart machines hinder this critical channel of learning: my colleagues and I have found that AI takes learning opportunities away from newcomers, reduces veterans' chances to practice, and forces both to master new methods and old ones at the same time, overwhelming them.
So, can employees learn to work with these machines? My earlier observations suggest they can, when learners engage in practices that defy convention, in settings where those practices are not under scrutiny and where tolerance for their consequences is high. I call this widespread, informal process "covert learning."
I identified four common barriers to acquiring needed skills; it is these barriers that trigger covert learning.
In any job, training employees incurs costs and reduces quality, because novices are slow and prone to mistakes. As organizations embrace smart machines, they often respond by limiting trainees' involvement in the risky and complex parts of the work. Trainees are thus denied the chance to stretch the boundaries of their abilities and to grow from mistakes made with limited help, precisely the conditions needed for learning new skills.
The same phenomenon occurs in investment banking. Studying one investment bank, Callen Anthony of New York University found that as partners used algorithms to assist with mergers and acquisitions and to interpret valuations, junior analysts drifted further and further from senior partners. The junior analysts' job was reduced to pulling raw reports from the system (a web-based collection of financial data on companies of interest) and passing them to senior partners for analysis.
What is the implicit logic of this division of labor? First, it reduces the risk that junior staff will make mistakes on complex, client-facing work; second, it maximizes senior partners' effectiveness: the less time juniors spend explaining their work to the partners, the more the partners can focus on higher-level analysis. This improves efficiency in the short term, but it deprives junior analysts of the chance to take on complex work, makes it harder for them to understand the full valuation process, and weakens the firm's future capabilities.
Sometimes smart machines come between trainees and their work, and sometimes they keep experts from important hands-on practice. In robotic surgery, the surgeon cannot see the patient's body or the robot for most of the procedure, making it impossible to directly assess and manage critical aspects of the operation. In traditional surgery, for example, surgeons are acutely aware of how devices and instruments touch the patient's body and adjust accordingly. In robotic surgery, surgeons must rely on others to alert them if a robotic arm hits the patient's head or if an arm is about to swap an instrument. This affects learning in two ways: surgeons cannot hone the skills needed to fully understand their own work, and they must acquire those new skills through others.
Robotic surgery deploys a new set of skills and techniques to achieve what traditional surgery aims for, promising greater precision and better ergonomics. It has been incorporated directly into the curriculum, with residents required to learn both robotic and traditional methods. But the curriculum does not give them enough time to master both, which often produces the worst outcome: mastering neither. I call this problem methodological overload.
Decades of research and tradition have taught trainee doctors to follow the "see one, do one, teach one" approach. But as we have seen, that approach is ill suited to robotic surgery. Still, the pressure to rely on it remains strong, and deviation is rare: surgical training studies, standard procedures, policies, and senior surgeons all continue to emphasize traditional learning methods, even where they are clearly inappropriate for robotic surgery.
Faced with these obstacles, it is no surprise that covert learners quietly bypass or break the rules to get the guidance and experience they need. Nearly 100 years ago, the sociologist Robert Merton showed that people resort to extraordinary means when legitimate ones no longer achieve worthy goals. The same is true of professional expertise, perhaps the ultimate goal of a career.
Given the barriers I have described, we should expect people to learn key skills in other ways. These methods are generally flexible and effective, but they often impose costs on individuals and organizations: covert learners may be punished, for example by losing practice opportunities or status, or may cause waste or even harm. Yet people take the risk again and again, because their methods work where the approved ones fail. It would be wrong to imitate these extraordinary methods indiscriminately, but they do have features organizations can learn from.
As smart technologies grow more capable, covert learning is evolving rapidly; new forms will emerge over time, offering new lessons. Caution is vital. Covert learners usually know that what they are doing is unconventional and that they may be punished for it. (Imagine a surgical resident openly declaring a wish to work with the least skilled attending.) Because it produces results, middle managers often turn a blind eye to these practices, as long as covert learners do not openly acknowledge them. When observers, especially senior managers, announce that they want to study how employees gain skills by breaking rules, learners and their managers may be reluctant to share their experiences. A better approach is to bring in a neutral third party who can guarantee strict anonymity and compare practices across cases. My informants came to know and trust me, and because they saw that I was observing work across many groups and facilities, they were confident their identities would be protected. That was crucial to getting them to speak candidly.
Organizational approaches to intelligent machines often stop at letting individual experts control the work while reducing reliance on trainees. Robotic surgical systems allowed senior surgeons to operate with less help, and they did. The investment bank's systems allowed senior partners to exclude junior analysts from complex valuation work, and they did. All stakeholders should insist that organization, technology, and work design improve productivity and strengthen on-the-job learning (OJL). In the Los Angeles Police Department, for example, this would mean changing incentives for patrol officers, redesigning the PredPol user interface, creating new roles to connect police and software engineers, and building an officer-initiated, annotated library of best-practice cases.
Artificial intelligence can help learners when they get stuck, coach experts in mentoring, and cleverly connect the two groups. For example, as a PhD student at MIT, Juho Kim built ToolScape and LectureScape, which crowdsource annotations for instructional videos and surface explanations at the points where previous viewers paused and sought clarification. He calls this learnersourcing. On the hardware side, augmented reality systems are beginning to bring expert guidance and annotation into the workflow.
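The core learnersourcing idea can be illustrated with a minimal sketch (hypothetical data and function name, not LectureScape's actual implementation): bin the timestamps at which learners pause a lecture video, and treat heavily paused segments as candidate points for crowd annotation.

```python
from collections import Counter

def candidate_annotation_points(pause_times, bin_seconds=5, min_pauses=3):
    """Group learner pause timestamps (in seconds) into fixed-width bins
    and return the start times of bins where many learners paused --
    likely points of confusion worth annotating."""
    bins = Counter((int(t) // bin_seconds) * bin_seconds for t in pause_times)
    return sorted(start for start, count in bins.items() if count >= min_pauses)

# Hypothetical pause log aggregated across many viewers of one video:
pauses = [12, 13, 14, 14, 47, 48, 48, 49, 50, 90]
print(candidate_annotation_points(pauses))  # → [10, 45]
```

A real system would weigh other signals too (rewinds, dwell time, explicit questions), but the principle is the same: individual learners' friction, aggregated, tells the system where expert explanation is most needed.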
Existing apps use tablets or smart glasses to layer guidance onto work in real time, and more sophisticated systems are expected soon. Such a system could, for example, overlay footage of a model welder onto an apprentice welder's field of view to show how the job was done, record the apprentice's attempts for comparison, and connect the apprentice to the model welder as needed. While much of the growing engineering community in these fields focuses on formal training, the deeper crisis is in OJL. We need to redirect our efforts toward OJL.
For thousands of years, technological advances have driven the redesign of work, and apprentices have acquired the necessary new skills from their mentors. But as we have seen, smart machines now push us to disconnect apprentices from mentors, and mentors from the work itself, in the name of productivity. Organizations often inadvertently choose productivity over employee development, so learning on the job becomes ever harder. Covert learners, however, are finding risky, unorthodox ways to learn anyway. Organizations that want to compete in a world of intelligent machines should pay close attention to these "nonconformists." Their actions offer insight into how best to get work done in the future, when experts, apprentices, and smart machines work and learn together.