Confirmed cases of data leaks through neural networks are still debated and have not been conclusively verified, but it is still wise not to take unnecessary risks. That is the view of Sergey Kuzmenko, Head of the Digital Expertise Center at Roskachestvo.
Kuzmenko warned that users who upload confidential information to ChatGPT and similar AI models are inadvertently opening a path for its unauthorized use in the future. Technology advances relentlessly, he noted: what offers robust protection today may well be vulnerable tomorrow.
“Never upload data that could compromise you or other people,” he said. “That includes passport details, network identifiers, payment card data, sensitive medical records, logins and passwords for online services, and any other information that could identify a specific person and be used against them.”
The safest way to interact with neural networks, Kuzmenko added, is to use anonymized data: everything that could identify a particular person should be stripped out before anything is uploaded. And if there is any chance the information could be used against someone, he concluded, it is better not to put the question to the AI at all and to solve the task on one's own.
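As a rough illustration of the anonymization step described above, here is a minimal regex-based redaction sketch. The patterns and placeholder labels are hypothetical examples, not a production anonymizer: real personal data takes many more forms (names, addresses, free-text medical details) than a few regular expressions can cover.

```python
import re

# Hypothetical patterns for a few common identifier types.
# A real anonymization pipeline would need far broader coverage.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "card": re.compile(r"\b(?:\d[ -]?){13,19}\b"),   # payment card numbers
    "phone": re.compile(r"\+?\d[\d\s()-]{8,}\d"),     # phone-like digit runs
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a placeholder tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

sample = "Contact ivan@example.com, card 4111 1111 1111 1111."
print(redact(sample))
```

Only text that passes such a filter (ideally after a human review as well) would then be pasted into a chatbot prompt.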
In an unrelated development that illustrates the technology's broader capabilities, ChatGPT recently helped a woman in Wales identify a potentially serious health condition: the AI warned her that a mole on her palm could be a sign of cancer.

