


How edge computing helps enterprises reduce costs and increase efficiency
Growing hopes for edge computing have filled the industry with bold ideas, such as the prediction that "the edge will eat the cloud" and that real-time automation will become ubiquitous in healthcare, retail, and manufacturing.
Today, more and more experts believe that edge computing will play a key role in the digital transformation of almost every enterprise. But progress has been slow. Traditional thinking prevents companies from taking full advantage of real-time decision-making and resource allocation. To understand how and why this happened, let’s take a look back at the first wave of edge computing and what’s happened since then.
First Wave of Edge Computing: Internet of Things (IoT)
For most industries, the concept of edge is closely related to the first wave of Internet of Things (IoT). At the time, much of the focus was on collecting data from small sensors affixed to everything and then transmitting that data to a central location — such as the cloud or a main data center.
These data streams then had to be correlated through a process commonly known as sensor fusion. At the time, sensor economics, battery life, and coverage often resulted in data streams that were too sparse and of low fidelity. Retrofitting existing equipment with sensors was also costly: while the sensors themselves were inexpensive, installation was time-consuming and required trained personnel. Finally, the expertise needed to analyze sensor-fusion data was rarely embedded in an organization's existing workforce. Together, these factors slowed IoT adoption.
In addition, security concerns have held back large-scale IoT deployments. The calculation is simple: thousands of connected devices across multiple locations equate to a massive, and often unknown, amount of exposure. With potential risks outweighing unproven benefits, many considered it prudent to take a wait-and-see approach.
Beyond IoT 1.0
It is becoming increasingly clear that the edge is not really about the Internet of Things; it is about making real-time decisions across operations that span distributed sites and geographies. In IT, and increasingly in industrial environments, we refer to these distributed data sources as the edge, and to decision-making at all of these locations outside the data center or cloud as edge computing.
Today, the edge is everywhere: where we live, where we work, and wherever human activity takes place. Sparse sensor coverage has been addressed by newer, more flexible sensors. New assets and equipment arrive with a wide range of integrated sensors, and sensors are increasingly augmented with high-resolution, high-fidelity imaging such as X-ray equipment and lidar.
These richer data streams are processed locally for a simple reason: there is not enough bandwidth, and not enough time, to move everything between the edge location and the cloud. Data at the edge matters most in the short term, so it can now be analyzed and consumed in real time at the edge rather than shipped to the cloud for later processing. To achieve new levels of efficiency and superior operational feedback, computing must occur at the edge.
This is not to say that the cloud is irrelevant. The cloud still plays an important role in edge computing because it can deploy and manage edge systems across all locations. For example, the cloud provides access to applications and data from other locations, and lets remote experts manage systems, data, and applications around the world. The cloud can also analyze large data sets spanning multiple locations, show trends over time, and generate predictive analytics models.
Edge technology, then, is about handling big data flows at large numbers of geographically dispersed locations. One must adopt this new understanding of the edge to truly understand what is now possible with edge computing.
Today: Real-time Edge Analytics
It is remarkable what can be done at the edge today compared to just a few years ago. Data can now be generated by large numbers of sensors and cameras rather than a limited few, and then analyzed on computers thousands of times more powerful than those of 20 years ago, all at reasonable cost. High-core-count CPUs and GPUs, together with high-throughput networks and high-resolution cameras, are now readily available, making real-time edge analytics a reality.
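This division of labor between edge and cloud can be sketched in a few lines. The following is a minimal illustration, with hypothetical names and readings: raw data is processed locally at the edge, and only a compact summary is shipped to the cloud for fleet-wide trend analysis.

```python
from statistics import mean

def summarize_at_edge(readings, site_id):
    """Process raw sensor readings locally; return only a compact
    summary suitable for periodic upload to the cloud."""
    return {
        "site": site_id,
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
        "min": min(readings),
    }

# Raw readings stay at the edge; only a handful of numbers cross the network.
summary = summarize_at_edge([20.1, 20.4, 35.0, 20.2], site_id="plant-7")
```

In a real deployment the summary would feed a cloud-side model for trends and predictions, while real-time decisions are made against the raw readings at the site itself.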
Deploying real-time analytics at the edge, where business activity occurs, helps enterprises understand their operations and react immediately. With this knowledge, many operations can be further automated, increasing productivity and reducing losses. Here are some of today's real-time edge analytics use cases.
Supermarket Fraud Prevention
Many supermarkets now use some form of self-service checkout, and unfortunately, they are also seeing an increase in fraud incidents. Some unscrupulous shoppers substitute cheaper barcodes for more expensive items and pay less. To detect this type of fraud, stores now use high-resolution cameras and scales that compare each scanned barcode, and the item's measured weight, against what the product should actually be. These cameras are relatively cheap but generate huge amounts of data. By moving computing to the edge, that data can be analyzed instantly. This means stores can detect fraud in real time, rather than after the "customer" has left the parking lot.
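The core of such a check is simple: compare what was scanned against what the scale reports. The sketch below uses a hypothetical product catalog and tolerance; a real system would add camera-based item recognition on top.

```python
# Hypothetical product catalog: barcode -> (name, expected weight in grams)
CATALOG = {
    "0001": ("bananas", 1200),
    "0002": ("wagyu beef", 450),
}

def flag_possible_fraud(scanned_barcode, measured_grams, tolerance=0.15):
    """Flag a scan when the scale reading deviates from the catalog
    weight by more than the given relative tolerance."""
    name, expected = CATALOG[scanned_barcode]
    deviation = abs(measured_grams - expected) / expected
    return deviation > tolerance

flag_possible_fraud("0002", 1180)  # beef barcode with a banana-like weight -> True
```

Because the comparison runs at the checkout itself, the flag is raised while the shopper is still at the kiosk, not after a later batch job in the cloud.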
Food Production Monitoring
Today, a manufacturing plant can be outfitted with dozens of cameras and sensors at every step of the manufacturing process. Real-time analytics and AI-driven inference can reveal an error within milliseconds or even microseconds. For example, a camera might show that too much sugar has been added, or too much of another ingredient. With cameras and real-time analytics, production lines can adjust to correct such problems, and can even be shut down when repairs are needed, before catastrophic damage occurs.
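A minimal version of this kind of line control is a tiered check on each reading, with hypothetical recipe bounds: small deviations trigger an adjustment, large ones trigger a shutdown.

```python
def check_reading(sensor, value, limits):
    """Return an action for one reading: 'ok', 'adjust' when slightly
    out of range, or 'shutdown' when far out of range."""
    low, high = limits[sensor]
    if low <= value <= high:
        return "ok"
    span = high - low
    # More than one full range outside the bounds: stop the line.
    if value < low - span or value > high + span:
        return "shutdown"
    return "adjust"

LIMITS = {"sugar_g_per_kg": (40, 60)}  # hypothetical recipe bounds

check_reading("sugar_g_per_kg", 65, LIMITS)   # slightly high -> "adjust"
check_reading("sugar_g_per_kg", 150, LIMITS)  # far out of range -> "shutdown"
```

Running this decision at the edge is what makes the millisecond reaction time possible; a round trip to the cloud would arrive long after the bad batch had moved down the line.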
AI-Powered Edge Computing for Healthcare
In the healthcare sector, infrared and X-ray cameras have been changing the game: they produce high-resolution images and deliver them rapidly to technicians and doctors. At such resolution, AI can now filter, evaluate, and flag abnormalities before a doctor confirms them. By deploying AI-powered edge computing, doctors save time because they do not need to send data to the cloud to get a diagnosis. When an oncologist checks whether a patient has lung cancer, real-time AI filtering can be applied to the patient's lung images for a fast, accurate diagnosis, sparing the patient the anxiety of waiting for results.
Autonomous Cars Driven by Analytics
Today, self-driving cars are possible because of relatively cheap and available cameras that provide 360-degree stereoscopic vision perception. Analysis also enables precise image recognition, so a computer can recognize the difference between a tumbleweed and a neighbor's cat, and decide whether to brake or maneuver around an obstacle to stay safe.
The affordability, availability, and miniaturization of high-performance GPUs and CPUs enable real-time pattern recognition and vector planning for autonomous vehicle driving intelligence. For self-driving cars to be successful, they must have enough data and processing power to make intelligent decisions and take corrective actions fast enough. Now, this is only possible with today’s edge technologies.
Distributed Architecture in Practice
When extremely powerful computing is deployed at the edge, enterprises can optimize operations without worrying about latency or losing connectivity to the cloud. With compute distributed at the edge, problems are solved in real time even when connectivity to the cloud is only sporadic.
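One common pattern behind this tolerance for sporadic connectivity is store-and-forward: the edge keeps making decisions and buffering results locally, then flushes them whenever the cloud link happens to be up. A minimal sketch, with hypothetical event names:

```python
from collections import deque

class EdgeUplink:
    """Buffer results locally and flush them when the cloud link is up;
    edge decisions never wait on the network."""

    def __init__(self):
        self.buffer = deque()

    def record(self, event):
        self.buffer.append(event)  # always succeeds, even offline

    def flush(self, link_up):
        """Send buffered events if connected; return how many were sent."""
        if not link_up:
            return 0
        sent = len(self.buffer)
        self.buffer.clear()  # stand-in for a successful upload
        return sent

uplink = EdgeUplink()
uplink.record({"line": 3, "status": "adjusted"})
uplink.flush(link_up=False)  # link down: event stays buffered
uplink.flush(link_up=True)   # link restored: buffered event is sent
```

The key property is that `record` never blocks on the network, so local operations continue at full speed regardless of cloud availability.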
We have come a long way since the first wave of edge technologies. Thanks to these advances, businesses now gain a more complete view of their operations. Today's edge technologies not only help businesses increase profits; they also help them reduce risk and improve products, services, and customer experiences.
The above is the detailed content of How edge computing helps enterprises reduce costs and increase efficiency. For more information, please follow other related articles on the PHP Chinese website!
