How edge computing helps enterprises reduce costs and increase efficiency
Growing hopes for edge computing have filled the industry with bold ideas, such as “the edge will eat the cloud” and that real-time automation will become ubiquitous in healthcare, retail and manufacturing.
Today, more and more experts believe that edge computing will play a key role in the digital transformation of almost every enterprise. But progress has been slow. Traditional thinking prevents companies from taking full advantage of real-time decision-making and resource allocation. To understand how and why this happened, let’s take a look back at the first wave of edge computing and what’s happened since then.
First Wave of Edge Computing: Internet of Things (IoT)
The first wave of edge computing arrived with the Internet of Things, in which sensors attached to equipment generated streams of operational data. These data streams must then be correlated through what is commonly known as sensor fusion. At the time, sensor economics, battery life, and limited coverage often resulted in data streams that were too sparse and of low fidelity. In addition, retrofitting existing equipment with sensors was often costly: while the sensors themselves were inexpensive, installation was time-consuming and required trained personnel. Finally, the expertise required to analyze data through sensor fusion was rarely embedded in the knowledge base of an organization's existing workforce. Together, these factors slowed IoT adoption.
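To make the idea of sensor fusion concrete, here is a minimal illustrative sketch, not taken from any real deployment: it aligns two hypothetical sensor streams (temperature and vibration, with made-up readings) by timestamp so that a correlated spike in both signals can be flagged.

```python
from bisect import bisect_left

# Hypothetical example: two independent sensor streams, each a list of
# (timestamp_seconds, value) tuples sampled at different rates.
temperature = [(0.0, 71.2), (1.0, 71.4), (2.0, 73.9), (3.0, 74.1)]
vibration   = [(0.2, 0.01), (0.7, 0.02), (1.9, 0.35), (2.8, 0.40)]

def nearest(stream, t):
    """Return the reading in `stream` whose timestamp is closest to t."""
    times = [ts for ts, _ in stream]
    i = bisect_left(times, t)
    candidates = stream[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda r: abs(r[0] - t))

# "Fuse" the streams: for each temperature sample, attach the closest
# vibration sample so the two signals can be analyzed together.
fused = [(t, temp, nearest(vibration, t)[1]) for t, temp in temperature]

for t, temp, vib in fused:
    # A correlated spike in both signals is more meaningful than either alone.
    if temp > 73.0 and vib > 0.3:
        print(f"t={t:.1f}s: possible bearing problem (temp={temp}, vib={vib})")
```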
In addition, concerns about security limited large-scale adoption of the Internet of Things. The calculation was simple: thousands of connected devices across multiple locations equate to a massive and often unknown amount of exposure. With potential risks outweighing unproven benefits, many organizations felt it prudent to take a wait-and-see approach.
Beyond IoT 1.0
Edge computing today is no longer just about collecting device data; it is about making real-time decisions across distributed sites and geographies. In IT, and increasingly in industrial environments, we refer to these distributed data sources as the edge. Decision-making at all of these locations outside of the data center or cloud is what we call edge computing.
Nowadays, the edge is everywhere
The edge is everywhere: where we live, where we work, and wherever human activity takes place. Sparse sensor coverage has been addressed with newer, more flexible sensors. New assets and technologies arrive with a wide range of integrated sensors, and sensors are now often augmented with high-resolution, high-fidelity imaging (X-ray equipment, lidar).

This flood of data must be handled where it is generated. The reason is simple: there is not enough available bandwidth, or time, between the edge location and the cloud, and data at the edge matters most in the short term. Data can now be analyzed and consumed in real time at the edge, rather than shipped off to be processed and analyzed later in the cloud. To achieve new levels of efficiency and superior operational feedback, computing must occur at the edge.

This is not to say that the cloud is irrelevant. The cloud still plays an important role in edge computing because it can be used to deploy and manage edge systems across all locations. For example, the cloud provides access to applications and data from other locations, and it lets remote experts manage systems, data, and applications around the world. The cloud can also be used to analyze large data sets spanning multiple locations, show trends over time, and generate predictive analytics models.

Edge technology, then, is about handling large data flows across large numbers of geographically dispersed locations. Adopting this new understanding of the edge is essential to seeing what is now possible with edge computing.

Today: Real-time Edge Analytics

It is remarkable what can be done at the edge today compared to just a few years ago. Data can now be generated by a large number of sensors and cameras rather than being limited to a few, and then analyzed on computers thousands of times more powerful than those of 20 years ago, all at a reasonable cost. High core count CPUs and GPUs, along with high-throughput networks and high-resolution cameras, are now readily available, making real-time edge analytics a reality.

Deploying real-time analytics at the edge, where business activity occurs, helps enterprises understand their operations and react immediately. With this knowledge, many operations can be further automated, increasing productivity and reducing losses. Here are some of today's real-time edge analytics use cases:

Many supermarkets now use some form of self-service checkout, and unfortunately they are also seeing an increase in fraud. Some unscrupulous shoppers substitute the barcode of a cheaper item for that of a more expensive one and pay less. To detect this type of fraud, stores now use high-resolution cameras that compare what is scanned and weighed against the actual product. These cameras are relatively cheap but generate huge amounts of data. By moving computing to the edge, that data can be analyzed instantly, meaning stores can detect fraud in real time rather than after the "customer" has left the parking lot.
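The barcode-switching check described above boils down to a cross-check between what the scanner reports and what the camera and scale observe. The following is a minimal sketch of that logic; the product catalog, camera label, and weight tolerance are illustrative assumptions rather than details from the article.

```python
# Hypothetical edge-side check at a self-checkout lane. The camera-based
# classifier and product catalog are stand-ins for whatever a real system uses.
PRODUCT_CATALOG = {
    "0001": {"name": "organic beef", "price": 18.99, "weight_g": 450},
    "0002": {"name": "bananas",      "price": 0.99,  "weight_g": 450},
}

WEIGHT_TOLERANCE_G = 50  # assumed tolerance for scale noise


def check_scan(scanned_barcode: str, camera_label: str, measured_weight_g: float) -> str:
    """Compare the scanned barcode against what the camera and scale observe."""
    expected = PRODUCT_CATALOG[scanned_barcode]

    # Flag if the camera thinks the item is a different product...
    if camera_label != expected["name"]:
        return f"ALERT: scanned '{expected['name']}' but camera sees '{camera_label}'"

    # ...or if the measured weight is far from what the barcode implies.
    if abs(measured_weight_g - expected["weight_g"]) > WEIGHT_TOLERANCE_G:
        return f"ALERT: weight {measured_weight_g}g does not match '{expected['name']}'"

    return "OK"


# Example: a shopper scans the banana barcode while bagging beef.
print(check_scan("0002", camera_label="organic beef", measured_weight_g=455))
```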
Today, a manufacturing plant can be outfitted with dozens of cameras and sensors at every step of the production process. Real-time analytics and AI-driven inference can reveal an error within milliseconds, or even microseconds, of it occurring. For example, a camera might show that too much sugar, or too much of another ingredient, has been added. With cameras and real-time analytics, production lines can adjust to correct problems, and can even be shut down in a controlled way when repairs are needed, before catastrophic damage occurs.
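As a rough sketch of what such a line-side check might look like, an edge node could evaluate each batch of readings against recipe limits and decide whether to continue, adjust, or stop the line. The sensor names, limits, and control actions below are illustrative assumptions, not details from the article.

```python
# Hypothetical recipe limits for one step of a food production line.
LIMITS = {
    "sugar_kg":        (4.8, 5.2),   # (min, max) per batch
    "mix_temp_c":      (60.0, 65.0),
    "motor_vibration": (0.0, 0.5),
}

def evaluate(reading: dict) -> str:
    """Return a control decision for one batch of sensor readings."""
    out_of_range = {
        name: value
        for name, value in reading.items()
        if not (LIMITS[name][0] <= value <= LIMITS[name][1])
    }
    if not out_of_range:
        return "CONTINUE"
    # Vibration outside limits suggests a mechanical fault: stop before damage.
    if "motor_vibration" in out_of_range:
        return f"STOP_LINE: {out_of_range}"
    # Ingredient or temperature drift can often be corrected on the fly.
    return f"ADJUST: {out_of_range}"

print(evaluate({"sugar_kg": 5.6, "mix_temp_c": 62.0, "motor_vibration": 0.1}))
print(evaluate({"sugar_kg": 5.0, "mix_temp_c": 62.0, "motor_vibration": 0.9}))
```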
In the healthcare sector, infrared and X-ray cameras have been changing the game, offering high-resolution images and delivering them rapidly to technicians and doctors. With such high resolution, AI can now filter, evaluate, and diagnose abnormalities before a doctor confirms them. By deploying AI-powered edge computing, doctors save time because the data does not need to be sent to the cloud for a diagnosis. When an oncologist is assessing whether a patient has lung cancer, for example, real-time AI filtering can be applied to the patient's lung images to support a fast, accurate diagnosis and greatly reduce the anxiety of waiting for results.
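One way to picture this workflow is an edge node that scores each new scan with a local AI model and immediately queues the suspicious ones for review, with no cloud round-trip. Everything in the sketch below, including the stubbed run_model call and the review threshold, is a simplified assumption rather than a description of any real clinical system.

```python
from dataclasses import dataclass

@dataclass
class Scan:
    patient_id: str
    pixels: list  # placeholder for image data

def run_model(scan: Scan) -> float:
    """Stub for an on-site AI model that scores how abnormal a scan looks (0-1)."""
    # In practice this would be an inference call on a local GPU;
    # here we fake a score so the example is self-contained.
    return 0.91 if scan.patient_id == "P-1002" else 0.05

REVIEW_THRESHOLD = 0.8  # assumed cutoff for flagging a scan

def triage(scans: list) -> list:
    """Flag scans that should go to the front of the radiologist's queue."""
    flagged = []
    for scan in scans:
        score = run_model(scan)          # inference happens at the edge,
        if score >= REVIEW_THRESHOLD:    # no round-trip to the cloud
            flagged.append(scan.patient_id)
    return flagged

print(triage([Scan("P-1001", []), Scan("P-1002", [])]))
```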
Today, self-driving cars are possible because of relatively cheap and available cameras that provide 360-degree stereoscopic vision perception. Analysis also enables precise image recognition, so a computer can recognize the difference between a tumbleweed and a neighbor's cat, and decide whether to brake or maneuver around an obstacle to stay safe.
The affordability, availability, and miniaturization of high-performance GPUs and CPUs enable the real-time pattern recognition and path planning behind autonomous driving intelligence. For self-driving cars to succeed, they must have enough data and enough processing power to make intelligent decisions and take corrective action fast enough. This has only become possible with today's edge technologies.
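To see why "fast enough" matters, a quick back-of-the-envelope latency budget helps; the speeds and delays below are illustrative numbers, not figures from the article. At highway speed, every extra tenth of a second of processing or network delay translates into meters traveled before the vehicle can react.

```python
# Illustrative latency budget: how far does a car travel while it "thinks"?
speed_kmh = 100                      # assumed highway speed
speed_ms = speed_kmh * 1000 / 3600   # ~27.8 m/s

for latency_ms in (10, 100, 300):    # on-board inference vs. cloud round-trips
    distance_m = speed_ms * (latency_ms / 1000)
    print(f"{latency_ms:4d} ms of delay = {distance_m:5.1f} m traveled before reacting")
```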
When extremely powerful computing is deployed at the edge, enterprises can optimize operations without worrying about latency or about losing connectivity to the cloud. With computing distributed at the edge, problems are solved in real time even when connectivity is only sporadic.
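A common way to realize "real time locally, cloud when available" is a store-and-forward loop: decisions are made immediately on local data, while summaries are buffered and uploaded whenever a connection happens to exist. The sketch below illustrates that general pattern; the cloud_is_reachable and upload stubs are assumptions standing in for a real uplink.

```python
import random
from collections import deque

outbox = deque()  # summaries waiting for the next window of connectivity

def decide_locally(reading: float) -> str:
    """The real-time decision never waits on the network."""
    return "SHUT_VALVE" if reading > 0.8 else "OK"

def cloud_is_reachable() -> bool:
    # Stub: in reality this would be a health check on the uplink.
    return random.random() < 0.3

def upload(batch: list) -> None:
    # Stub for sending buffered summaries to the cloud for trend analysis.
    print(f"uploaded {len(batch)} buffered records")

for reading in (0.2, 0.9, 0.4, 0.85, 0.1):
    action = decide_locally(reading)          # always instant, always local
    outbox.append({"reading": reading, "action": action})
    if cloud_is_reachable():                  # sync opportunistically
        upload(list(outbox))
        outbox.clear()
```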
We have come a long way since the first wave of edge technologies. Thanks to those advances, businesses now have a more complete view of their operations. Today's edge technologies not only help businesses increase profits; they also help reduce risk and improve products, services, and customer experiences.