ChatGPT Past and Future: The Evolution of Artificial Intelligence and Data Privacy in Digital Communications
Translator | Liu Tao
Reviewer | Chonglou
The development of artificial intelligence over the past few years has brought us not only opportunities but also frustration. Major breakthroughs have reshaped the Internet, many of them for the better.
However, before people had time to prepare for the release of OpenAI’s ChatGPT, it had already swept the world. Its ability to converse naturally with humans and return insightful answers in moments is unprecedented.
As public attention turned to what ChatGPT could do, forward-looking leaders around the world realized that digital communication technology was about to undergo revolutionary change.
But innovation often comes with controversy, and in this case the supernova chatbot has had to contend with legitimate data privacy concerns.
Developing ChatGPT required extensive data collection, and because OpenAI has struggled to explain precisely how the chatbot works and how it processes and stores data, thought leaders and government privacy watchdogs have raised growing concerns about its data privacy practices.
The issue has not gone unnoticed by the public. According to a 2023 survey, 67% of global consumers believe they are losing control over the data they share with technology companies.
The same survey also showed that 72.6% of iOS apps track private user data, and free apps are 4 times more likely to track user data than paid apps.
If you are concerned about this, remember that most users of ChatGPT still use the free version.
With this in mind, data privacy companies need to seize the momentum ChatGPT has created: build products that strengthen data privacy, and foster a culture of greater data transparency and accountability. That would help people understand their data rights and how their data is actually used, while keeping these groundbreaking AI technologies from relying on the unethical monetization tactics so common among big tech companies.
ChatGPT is a large language model (LLM), which means it requires vast amounts of data to work properly, giving it the ability to predict and process information coherently.
In other words, if you have ever published writing on the Internet, it is very likely that ChatGPT has scanned and processed it.
LLMs like ChatGPT rely heavily on large amounts of data from online sources, such as e-books, articles, and social media posts, to train their algorithms. This is what lets them generate responses that read almost identically to human-written text.
In short, any article that has been published to the web may have been used to train ChatGPT, or the competing LLMs that are sure to follow in the wake of its success.
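To make it concrete why so much scraped text matters, here is a minimal sketch in Python of next-word prediction from training data. It is a toy bigram counter, nothing like ChatGPT's actual neural architecture, and the corpus and function names are invented purely for illustration.

```python
# A toy, purely illustrative "language model": count, from a handful of
# documents, which word tends to follow which, then predict the next word
# from those counts. ChatGPT uses a neural network trained on vastly more
# text, but the dependence on training data is the same in spirit.
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count, for every word, the words that follow it in the training text."""
    model = defaultdict(Counter)
    for document in corpus:
        words = document.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            model[current_word][next_word] += 1
    return model

def predict_next(model, word):
    """Return the continuation seen most often during training, if any."""
    followers = model.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# Hypothetical "scraped" documents standing in for web-scale training data.
corpus = [
    "data privacy matters to every internet user",
    "data privacy regulations protect every internet user",
]
model = train_bigram_model(corpus)
print(predict_next(model, "data"))   # -> "privacy"
print(predict_next(model, "every"))  # -> "internet"
```

The more text such a model ingests, the more fluent its predictions become, which is exactly why published web content is so valuable as training material.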
Concerns over data privacy are unsurprising: OpenAI recently admitted that a data leak was caused by a vulnerability in an open source library, and a cybersecurity firm found that a recently added component was affected by an actively exploited flaw.
OpenAI conducted an investigation and found that the leaked data included the titles of active users’ chat histories and the first message of newly created conversations.
The vulnerability also exposed the payment information of 1.2% of ChatGPT Plus subscribers, including their first and last names, email addresses, payment addresses, payment card expiration dates, and the last four digits of their payment card numbers.
To call this a data protection disaster is an understatement. ChatGPT probably holds more information than any other product on the planet, and sensitive information was already leaking just months after its release.
The silver lining is this: the public attention drawn to the real privacy risks ChatGPT poses is an excellent opportunity for individuals to start taking data protection seriously and to dig deeper into the details. This is especially important given the rapid expansion of ChatGPT’s user base.
Beyond taking precautions and remaining vigilant, users should also exercise their data subject rights (DSRs), which include the rights to access, correct, and delete their personal data.
In the digital age, every user must become an advocate for stronger data privacy regulations so that they can better control their personal information and ensure that it is used with the utmost responsibility.
ChatGPT appears to have responded to this: new sessions now warn people not to enter sensitive data or company secrets, because once inside the system they are no longer secure.
As Samsung has discovered, this is easier said than done, and more people need to pay attention and exercise caution about what they put into ChatGPT prompts.
Using a new ChatGPT plugin to shop may seem harmless, but do you really want an insecure digital record on the Internet of everything you have ever eaten?
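For users who must paste text into a chatbot anyway, one simple precaution is to scrub obviously sensitive values first. Below is a minimal sketch of that idea, assuming a few hypothetical regex patterns and a made-up function name; it is illustrative only and far from a complete PII detector.

```python
# A minimal, illustrative sketch of one precaution: redact anything that
# looks like an email address, payment card, or phone number before the
# text is sent to a chatbot. The patterns here are hypothetical and not
# exhaustive; real PII detection needs a dedicated tool.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\+?\d{1,3}[ -]?\(?\d{2,4}\)?[ -]?\d{3,4}[ -]?\d{3,4}\b"),
}

def redact_prompt(text: str) -> str:
    """Replace anything matching a known sensitive-data pattern."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Refund order 1042 to jane.doe@example.com, card 4111 1111 1111 1111."
print(redact_prompt(prompt))
# -> "Refund order 1042 to [EMAIL REDACTED], card [CARD REDACTED]."
```

Even a crude filter like this keeps the most obviously damaging details out of a system whose data handling the user cannot audit.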
Until these privacy concerns are resolved, we as a public need to slow down and not get too caught up in the frenzy over new AI technologies.
It goes without saying that when users are willing to walk away from doing business with them, companies must take responsibility for inappropriate data use and protection practices.
Therefore, companies large and small should adopt transparent, easy-to-understand policies so that individuals clearly understand how their data is used, where it goes, and which third-party entities may have access to it.
In addition, business leaders should give users clear channels for exercising their data subject rights (DSRs) and train employees to adhere to ethical guidelines for data processing and storage.
We are still far from that goal: most default permissions sit in a regulatory gray area, because whether a user must opt out or opt in is not made clear and depends on where the user and the company are located.
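As a rough illustration of the difference, here is a minimal Python sketch of an opt-in-by-default consent check; the class, fields, and purpose names are hypothetical and not drawn from any specific regulation or product.

```python
# A minimal sketch of "explicit opt-in by default": processing is refused
# unless consent was affirmatively recorded, rather than assumed until the
# user opts out. Names are hypothetical and for illustration only.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    user_id: str
    # No purpose is granted by default: silence means "no", not "yes".
    granted_purposes: set = field(default_factory=set)

    def grant(self, purpose: str) -> None:
        self.granted_purposes.add(purpose)

def may_process(record: ConsentRecord, purpose: str) -> bool:
    """Allow processing only for purposes the user explicitly opted into."""
    return purpose in record.granted_purposes

user = ConsentRecord(user_id="u-123")
print(may_process(user, "model_training"))  # False: never opted in
user.grant("analytics")
print(may_process(user, "analytics"))       # True: explicit opt-in
```

The design choice is simply that absence of a recorded choice is treated as refusal, which removes the ambiguity that opt-out defaults create.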
Transparency, clarity and accountability should be at the forefront of every organization’s considerations regarding data privacy.
The rise of ChatGPT has ushered in a new era of data privacy vigilance, in which organizations and individuals need to be equally proactive in ensuring data is handled appropriately to avoid breaches and misuse.
ChatGPT is collecting more data, at a faster rate, than any product in history, and if its security fails, the impact on personal data privacy will be unparalleled.
If companies want to stay ahead of potential issues, they must start protecting data more strategically and rebuilding consumer trust in the Internet. Otherwise, a better shared digital future is in grave danger.
Original link: https://hackernoon.com/the-evolution-of-ai-and-data-privacy-how-chatgpt-is-shaping-the-future-of-digital-communication
Liu Tao, a 51CTO community editor, is responsible for the online monitoring and control systems of a large state-owned central enterprise.