Parents fear their children's "virtual friendships" with AI chat platforms

Parents have expressed fear of the virtual friendships forming between their teenage children and AI-powered chat platforms, noting that the children interact with them constantly, whether for entertainment, to ask about everyday matters, to get help with their studies, or to seek guidance on how to behave in different situations.

They stressed the importance of setting controls on the use of such platforms so that they do not dominate children's minds and ways of thinking, and of teaching children how to deal with modern technology without disclosing the personal data and thoughts that would allow these platforms to build a comprehensive picture of their private lives.

Cybersecurity specialists, in statements to «Emirates Today», warned of the danger of sharing sensitive personal and financial data when using AI-powered chat applications, as these are able to form a comprehensive view of a person's life and can be exploited for commercial and marketing purposes, posing potential risks to privacy and safety.

Parents told «Emirates Today» about their children's growing use of AI-powered chat programs in daily life, including for studying and doing homework, noting their surprise that these platforms know every detail, big and small, about their children.

They pointed out that children do not grasp the scale of the risks that come with disclosing personal data, making it important to raise this age group's awareness of privacy, to teach them to deal cautiously with chat programs and platforms without revealing personal information, and to prevent these tools from replacing parents as a source of advice and direction in daily life.

For his part, Soling said: «Our daily lives are witnessing a growing presence of artificial intelligence programs and platforms, from help with academic tasks to creative writing and casual everyday interactions.»

He added that although these tools are widely available and easy to use, it is essential to understand how they work, especially how they handle personal data.

Explaining what happens to the information we share, Soling noted that «when users interact with artificial intelligence chat platforms, the data they enter may be stored and, in some cases, reviewed to improve the system's performance and reliability».

He stated that «although companies such as OpenAI apply privacy policies to protect user data, the best practice remains to assume that any information shared may be retained or referenced later, even if it is not directly linked to a specific person».

He warned of the danger of oversharing: «Many users, especially younger ones, unintentionally reveal personal details, such as names, schools, contact information, or personal thoughts, while interacting with these programs and platforms. This type of information can be sensitive, and once it is shared on the Internet it becomes difficult to control how it spreads and is used, which poses potential risks to privacy and safety.»

He stressed the importance of verifying information and of understanding that AI-powered chat programs have no genuine knowledge of facts: they generate responses based on patterns extracted from training data that may include inaccurate information, outdated content, or inherent biases, so users must check important or sensitive information against reliable, authoritative sources.

He said that «best practices for using artificial intelligence chat programs responsibly include avoiding sharing personal information, verifying any information used for educational or professional purposes, and reviewing the privacy policy of the platform being used».

Cybersecurity expert Abdel Nour Sami said there are several ways artificial intelligence can access user data. The first is direct use and interaction, whether with AI based on language processing, such as ChatGPT, on image processing that modifies or generates images, as has happened recently, or on audio processing that generates or enhances sound. All of this data is stored, processed, and used to improve the product in one way or another.

He stressed that «users must be careful not to share their data, images, and private details, because this information is stored for a long time and may leak in various ways».

He said: «We must not share anything we would fear seeing spread. Basic information, such as name, e-mail, home address, phone model, network, and the phone's IP address, is obtained one way or another. It is possible to open the settings in ChatGPT and turn off the option to share data for the purpose of improving service quality, but this only partially hides the data. The memory feature can also be reviewed to see what data has been saved to facilitate the service, and the user can simply ask ChatGPT: What do you know about me? Tell me everything, or: What do you know about my personality? From this we conclude that the smartest software on earth does not rely only on the data we share explicitly; our personalities, our data, and our interactions are analyzed to infer other things.»

Abdel Nour explained that personal information is also collected through partner sites and advertisements, where a person's preferences and inclinations are tracked as they browse websites and use other services, and that data is shared between these services and companies for profit, service improvement, and marketing. Users should therefore always review the «cookies» settings when browsing sites and services. On the phone, permissions must be chosen carefully: what do we allow it to access? Photos? Geographic location? Contacts? All of these must be chosen deliberately.

Jack Fletcher, senior director of technology consulting at FTI Consulting, told «Emirates Today» that artificial intelligence tools have become widespread and are used for a wide variety of tasks, from planning an ideal holiday to drafting difficult work correspondence.

Fletcher noted that the rapid adoption of artificial intelligence has fundamentally changed the way we work, with many companies successfully rolling out internal AI tools that have improved productivity and organization. However, using AI tools in work environments without the IT department's approval exposes data and security to several risks: it may pose a real threat to the organization's compliance with regulatory standards and may disclose sensitive commercial information.

He pointed out that «one of the main pillars of many data privacy laws is the principle of data reuse and transparency: personal data may not be used for a secondary purpose that was not disclosed to the individual when the data was provided or consent was granted. Data privacy laws also often restrict the parties to which personal data may be transferred, while some artificial intelligence tools may store data in countries that lack strong data protection and security standards. Entering protected information into one of these tools may therefore expose the organization to violations of data protection and cybersecurity regulations».

He added that «the use of artificial intelligence tools involves a large number of security risks. Although advanced AI tools known for their reliability may apply strong security controls, this does not rule out the leakage or misuse of sensitive or commercially important information, including intellectual property».

He continued: «What makes this problem more complex is that the ability of the responsible teams to monitor this type of data leakage and prevent it effectively becomes far more limited when company devices are not used and the user turns to AI tools that are not approved within the work environment.»

He added: «To address this challenge, many compliance initiatives have focused on encouraging positive behavioral change by training employees on the dangers of using unauthorized artificial intelligence tools and reminding them of the basic principles of key policies, such as acceptable use policies. Employees should be directed toward internal artificial intelligence tools as soon as possible and encouraged to use company devices for all work-related tasks.»

