
What is artificial intelligence? Here’s a guide to artificial intelligence

WBOY
2023-04-08 18:01:05


By any measure, artificial intelligence (AI) has become big business.

According to Gartner, worldwide spending on artificial intelligence software was forecast to reach $62.5 billion in 2022. The same report noted that 48% of CIOs had already deployed some kind of AI software or planned to do so within the next 12 months.

All this investment has attracted a large number of startups focused on artificial intelligence products. CB Insights reports that AI startups raised $15.1 billion in the first quarter of 2022 alone, after investors poured $17.1 billion into the sector in the previous quarter. Given that data drives AI, it is no surprise that related fields such as data analytics, machine learning, and business intelligence are all seeing rapid growth.

But what exactly is artificial intelligence? Why has it become such an important and lucrative part of the tech industry?

What is artificial intelligence?

In some ways, artificial intelligence is the counterpart of natural intelligence. If living things are born with natural intelligence, then intelligence exhibited by machines can be called artificial. From that perspective, any "thinking machine" has artificial intelligence.

In fact, one of the early pioneers of artificial intelligence, John McCarthy, defined artificial intelligence as "the science and engineering of making intelligent machines."

In practice, however, computer scientists use the term artificial intelligence to refer to machines doing the kinds of thinking that humans have developed to a very high level.

Computers are very good at computing - taking input, manipulating it, and producing output. But in the past, they have not been able to do the other things humans are good at, such as understanding and generating language, identifying objects visually, creating art, or learning from past experience.

But this is all changing.

Nowadays, many computer systems can communicate with humans in ordinary language. They can also recognize faces and other objects. And they use machine learning techniques, especially deep learning, to learn from the past and make predictions about the future.

So, how did artificial intelligence get to this point?

A brief history of artificial intelligence

Many people trace the history of artificial intelligence back to 1950, when Alan Turing published "Computing Machinery and Intelligence". Turing's paper begins, "I propose to consider the question, 'Can machines think?'" and goes on to propose a scenario that became known as the Turing test. Turing suggested that a machine could be considered intelligent if a person could not tell its responses apart from those of a human.

In 1956, John McCarthy and Marvin Minsky hosted the first artificial intelligence conference, the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI). The conference convinced computer scientists that artificial intelligence was an achievable goal and laid the foundation for decades of further research. Early AI efforts produced programs that could play checkers and chess.

The 1960s saw the development of robots and several problem-solving programs. A notable highlight was ELIZA, a program that simulated a psychotherapist and provided an early example of human-machine communication.

In the 1970s and 1980s, artificial intelligence development continued, but at a slower pace. There was notable progress in robotics, such as robots that could see and walk, and Mercedes-Benz demonstrated an early (and extremely limited) self-driving vehicle. However, government funding for AI research was cut significantly, leading to a period known as the "AI winter."

In the 1990s, interest in artificial intelligence surged again. The Artificial Linguistic Internet Computer Entity (ALICE) chatbot showed that natural language processing could produce more natural human-machine conversation than ELIZA had. The decade also saw a proliferation of analytical techniques that laid the groundwork for later AI developments, as well as new recurrent neural network architectures. It was also the decade in which IBM's Deep Blue became the first chess computer to defeat a reigning world champion.

The first decade of the 2000s brought rapid innovation in robotics. The first Roombas began vacuuming carpets, NASA landed robotic rovers on Mars, and Google began working on self-driving cars.

Since 2010, artificial intelligence technology has grown at an unprecedented pace. Hardware and software have advanced to the point where object recognition, natural language processing, and voice assistants are practical. IBM's Watson won Jeopardy!. Siri, Alexa, and Cortana arrived, and chatbots became a fixture in modern retail. DeepMind's AlphaGo defeated the human world Go champion. And businesses across all industries began deploying artificial intelligence tools to help them analyze their data and achieve greater success.

Now artificial intelligence is truly starting to evolve beyond a few narrow, limited applications into more advanced implementations.

Types of Artificial Intelligence

Different groups of computer scientists have come up with different ways to classify types of artificial intelligence. A popular classification uses three categories:

1. Narrow artificial intelligence does one thing very well. Apple's Siri, IBM's Watson, and Google's AlphaGo are all examples of narrow AI. Narrow AI is quite common in today's world.

2. Artificial general intelligence is a theoretical form of AI that could perform most intellectual tasks as well as a human. Examples from popular movies include HAL from 2001: A Space Odyssey and J.A.R.V.I.S. from Iron Man. Many researchers are currently working toward artificial general intelligence.

3. Artificial superintelligence, also still theoretical, would far exceed human intelligence. This kind of AI is not even close to becoming a reality.

Another popular classification uses four different categories:

1. Reactive machines take input and produce output, but they have no memory and do not learn from past experience. The bots you play against in many video games are prime examples of reactive machines.

2. Limited-memory machines can look back in time to a degree. Many vehicles on the road today have advanced safety features that fall into this category. For example, when a car issues a warning while reversing because a vehicle or pedestrian is about to cross behind it, it is using a limited set of recent historical data to draw conclusions and produce output.

3. Theory-of-mind machines would be aware that humans and other entities exist and have their own independent thoughts and motivations. Most researchers agree that this kind of AI has not yet been developed, and some argue it should never be attempted.

4. Self-aware machines would know their own existence and identity. Although some researchers claim that self-aware artificial intelligence already exists today, few others agree. Developing self-aware artificial intelligence is highly controversial.

While these categories are interesting from a theoretical perspective, most organizations are more interested in what they can actually do with artificial intelligence. That brings us to the side of AI that generates a lot of revenue: AI use cases.

Use Cases of Artificial Intelligence

The possible AI use cases and applications are virtually endless. Some of the most common today include:

Recommendation Engines – Whether you are shopping for a new sweater, looking for a movie to watch, browsing social media, or trying to find love, you are likely to encounter an AI-based algorithm making recommendations. Most recommendation engines use machine learning models to compare a user's characteristics and historical behavior with those of similar users. These models are good at identifying preferences even when users are not aware of those preferences themselves.
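
For illustration, here is a minimal sketch of that idea in Python: it scores the items a user has not yet rated by weighting other users' ratings by how similar those users are. The tiny ratings matrix and item names are invented for this example; production recommenders are far more elaborate.

```python
import numpy as np

# Toy user-item ratings matrix (rows = users, columns = items); 0 = not rated.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)
items = ["sweater", "scarf", "thriller", "documentary"]

def cosine_similarity(a, b):
    """Cosine similarity between two users' rating vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user, top_n=2):
    """Score unrated items by how similar users rated them."""
    scores = np.zeros(len(items))
    for other in range(len(ratings)):
        if other != user:
            scores += cosine_similarity(ratings[user], ratings[other]) * ratings[other]
    scores[ratings[user] > 0] = -np.inf   # only suggest items the user has not rated
    return [items[i] for i in np.argsort(scores)[::-1][:top_n]]

print(recommend(user=0))  # items favored by the users most similar to user 0
```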

Natural Language Processing – Natural language processing (NLP) is a broad category of artificial intelligence that includes speech-to-text, text-to-speech, keyword recognition, information extraction, translation, and language generation. It allows humans and computers to interact through ordinary human language (spoken or typed) rather than through programming languages. Because many NLP tools incorporate machine learning, they tend to improve over time.
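
As a tiny illustration of one of the tasks listed above, keyword extraction, the following sketch uses only the Python standard library. The stop-word list and sample text are assumptions made for this example, not something from the article.

```python
import re
from collections import Counter

# A very small stop-word list, purely for illustration.
STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "is", "was",
              "it", "in", "on", "for", "with", "that", "this", "i", "my"}

def extract_keywords(text, top_n=3):
    """Return the most frequent non-stop-word tokens in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOP_WORDS)
    return [word for word, _ in counts.most_common(top_n)]

print(extract_keywords(
    "My order arrived late and the package was damaged. "
    "The support team resolved the damaged package quickly."
))  # e.g. ['package', 'damaged', 'order']
```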

Sentiment Analysis – Artificial intelligence can not only understand human language but also identify the emotions behind it. For example, AI can analyze thousands of tech support conversations or social media interactions and identify which customers are expressing strongly positive or negative emotions. This kind of analysis lets customer support teams focus on customers who may be at risk of leaving, or on passionate supporters who could be encouraged to become brand advocates.
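
A minimal sketch of how such a classifier might be built with scikit-learn is shown below. The handful of labeled messages is invented for illustration; a real system would be trained on thousands of labeled conversations.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented training set: text paired with a sentiment label.
train_texts = [
    "I love this product, it works perfectly",
    "Fantastic support, my issue was fixed in minutes",
    "This is the worst purchase I have ever made",
    "Still broken after three calls, I want a refund",
]
train_labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

new_messages = [
    "Thanks, the replacement arrived and everything works",
    "I am extremely frustrated and thinking of cancelling",
]
for message, label in zip(new_messages, model.predict(new_messages)):
    print(f"{label:8s} -> {message}")   # negative messages get human follow-up
```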

Voice Assistants – Many people interact with Siri, Alexa, Cortana, or Google Assistant every day. While we often take these assistants for granted, they incorporate advanced AI technologies, including natural language processing and machine learning.

Fraud Prevention – Financial services companies and retailers often use highly advanced machine learning to identify fraudulent transactions. These systems look for patterns in financial data and raise alerts when a transaction looks unusual or fits a known pattern of fraud, helping to prevent or mitigate criminal activity.
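
One common way to implement this kind of pattern-spotting is unsupervised anomaly detection. The sketch below uses scikit-learn's IsolationForest on synthetic transactions; the feature choices and contamination rate are assumptions made only for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic transactions: [amount in dollars, hour of day].
normal = np.column_stack([
    rng.normal(60, 20, size=500),   # typical purchase amounts
    rng.normal(14, 3, size=500),    # mostly daytime activity
])
suspicious = np.array([[4800.0, 3.0], [3900.0, 4.0]])  # large, late-night
transactions = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0)
flags = detector.fit_predict(transactions)   # -1 = anomaly, 1 = looks normal

print("flagged for review:\n", transactions[flags == -1])
```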

Image Recognition – Many people use AI-based facial recognition to unlock their phones. The same kind of technology also supports self-driving cars and allows many health-related scans and tests to be processed automatically.

Predictive Maintenance – Many industries, such as manufacturing, oil and gas, transportation, and energy, rely heavily on machinery, and downtime can be extremely costly. Companies now use a combination of object recognition and machine learning to identify in advance when equipment is likely to fail, so that maintenance can be scheduled before a breakdown occurs and downtime is minimized.
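
Here is a minimal sketch of predictive maintenance framed as a supervised learning problem. The sensor features (temperature, vibration, hours since service) and the failure labels are synthetic assumptions made for this example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=1)
n = 400

# Synthetic sensor history for n machines.
hours_since_service = rng.uniform(0, 2000, size=n)
temperature = rng.normal(70, 5, size=n) + 0.01 * hours_since_service
vibration = rng.normal(1.0, 0.2, size=n) + 0.0005 * hours_since_service
# Machines that run hot, vibrate, and are overdue for service tend to fail.
failed = (temperature + 20 * vibration + 0.02 * hours_since_service
          > rng.normal(135, 5, size=n)).astype(int)

X = np.column_stack([temperature, vibration, hours_since_service])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, failed)

# Score a machine in the field; schedule maintenance if the risk is high.
machine = np.array([[88.0, 1.6, 1800.0]])
risk = model.predict_proba(machine)[0, 1]
print(f"estimated failure risk: {risk:.0%}")
```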

Predictive and Prescriptive Analytics – Predictive algorithms can analyze almost any type of business data and use it to forecast likely future events. Prescriptive analytics, which is still in its infancy, goes a step further: it not only makes predictions but also recommends how organizations should prepare for those possible futures.
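
As a simple illustration of the predictive half, the sketch below fits a linear trend to past monthly sales and projects it three months ahead. The sales figures are invented, and real predictive models are usually far more sophisticated.

```python
import numpy as np

# Invented monthly sales history.
monthly_sales = np.array([120, 132, 128, 141, 150, 158, 163, 171], dtype=float)
months = np.arange(len(monthly_sales))

# Fit a straight line (degree-1 polynomial) to the historical data.
slope, intercept = np.polyfit(months, monthly_sales, deg=1)

# Project the trend three months into the future.
future = np.arange(len(monthly_sales), len(monthly_sales) + 3)
forecast = slope * future + intercept
print("forecast for the next three months:", np.round(forecast, 1))
```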

Self-Driving Cars – Most cars produced today have some autonomous features, such as parking assist, lane centering, and adaptive cruise control. While fully autonomous cars are still expensive and relatively rare, they are on the way, and the AI technology that powers them keeps getting better and cheaper.

Robotics – Industrial robots were one of the earliest applications of artificial intelligence, and they remain an important part of the AI market. Consumer robots, such as robot vacuum cleaners, bartenders, and lawn mowers, are becoming increasingly common.

Of course, these are just some of the better-known use cases. The technology is permeating our daily lives in so many ways that we are often not fully aware of it.

The future of artificial intelligence

So where is artificial intelligence headed? Clearly, it is already reshaping both consumer and business markets.

The technologies driving artificial intelligence continue to develop at a steady pace. Future advances such as quantum computing may eventually lead to major innovations, but in the near term, the technology itself seems likely to continue along a predictable path of continuous improvement.

What remains unclear is how humans will adapt to artificial intelligence. This issue will have a major impact on human life in the coming decades.

Many early AI implementations have run into significant challenges. In some cases, the data used to train a model introduces bias into the AI system, rendering it unusable.

In many other cases, businesses do not see the financial results they hoped for after deploying AI. The technology may be mature, but the business processes surrounding it are not.

Alys Woodward, senior research director at Gartner, said: "The artificial intelligence software market is accelerating, but its long-term trajectory will depend on whether enterprises can improve their artificial intelligence maturity."

Woodware added Dow: "Successful AI business outcomes will depend on careful selection of use cases. Use cases that provide significant business value while being scalable to reduce risk are critical to demonstrating the impact of AI investments on business stakeholders."

Organizations are turning to methods like AIOps to help better manage AI deployments. Increasingly, they are looking to human-centered AI, using AI to augment rather than replace human workers.

In a very real sense, the future of artificial intelligence may be more about people than machines.


Statement: This article is reproduced from 51cto.com. If there is any infringement, please contact admin@php.cn to have it removed.