Fri. Nov 14th, 2025

Omsk University Pioneers Advanced Voice Authentication for Data Protection

Researchers at Omsk State Technical University (OmSTU) have developed a neural network designed to effectively prevent personal data leaks. This innovative system authenticates users by analyzing their voice, even accounting for variations in timbre and intonation caused by emotional states. The findings have been published in the journal Applied System Innovation.

In the first quarter of 2025, Russian companies faced approximately 801 million cyberattacks, equating to over a hundred data breach attempts every second. OmSTU experts note that modern hackers target not only personal and financial client data but also sensitive medical and biometric information.

To enhance the security of such sensitive data, the university's scientists have created a voice authentication system based on a novel neural network model. According to Pavel Lozhnikov, OmSTU's Vice-Rector for Research and Innovation, the algorithm exhibits heightened sensitivity to external interference due to its innovative neuron types and their mathematical interconnections.

Lozhnikov elaborated, “Upon integrating our model's voice recognition procedure, the system will accurately identify the user while preventing malicious actors from extracting the voice password template. Furthermore, it boasts superior accuracy compared to its closest counterpart: its error rate is 2.1% versus 2.7%, and the generated password in our system is 1024 bits, compared to only 160 in the alternative.”
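To put those quoted figures in perspective, here is a back-of-the-envelope calculation (illustrative only, not the researchers' code): each additional key bit doubles the brute-force search space, and the error-rate change can be expressed as a relative reduction.

```python
# Figures reported in the article; the arithmetic below is illustrative.
new_bits, old_bits = 1024, 160    # generated password length in bits
new_err, old_err = 0.021, 0.027   # reported error rates

# Each extra bit doubles the brute-force search space for the key.
extra_bits = new_bits - old_bits
print(f"Search space is 2^{extra_bits} times larger")  # 2^864 times larger

# Relative reduction in the reported error rate.
reduction = (old_err - new_err) / old_err
print(f"Error rate reduced by about {reduction:.0%}")  # about 22%
```

In other words, the longer key makes exhaustive guessing astronomically harder, while the accuracy gain is a roughly one-fifth reduction in errors relative to the competing system.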

Lozhnikov emphasized that the system was designed to account for variations in a person's voice, whether they are speaking normally, feeling sleepy, nervous, or tired. One of the datasets used to train the new neural network included voice samples where speakers uttered password phrases not only in a normal state but also in altered emotional conditions.
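The dataset design described above, with each speaker recorded in both normal and altered states, can be sketched as follows. All identifiers and file names here are hypothetical; the point is only to show how grouping recordings by speaker lets training batches mix emotional states, so the model learns that shifts in timbre and intonation still belong to the same voice.

```python
from collections import defaultdict

# Hypothetical sample records: (speaker_id, emotional_state, audio_file).
samples = [
    ("spk01", "normal",  "spk01_norm_01.wav"),
    ("spk01", "sleepy",  "spk01_slp_01.wav"),
    ("spk01", "nervous", "spk01_nrv_01.wav"),
    ("spk02", "normal",  "spk02_norm_01.wav"),
    ("spk02", "tired",   "spk02_trd_01.wav"),
]

# Group recordings by speaker so each speaker's entry spans multiple states.
by_speaker = defaultdict(list)
for spk, state, path in samples:
    by_speaker[spk].append((state, path))

# A training loop could then draw same-speaker pairs across different states.
for spk, recs in sorted(by_speaker.items()):
    states = sorted({s for s, _ in recs})
    print(spk, "covers states:", states)
```

Pairing a speaker's "normal" recording with their "sleepy" or "nervous" one during training is a standard way to make a verification model robust to within-speaker variation.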

The university stated, “Our scientific school, ‘Secure Neural Network Algorithms for Artificial Intelligence,’ develops solutions that make it impossible, or at least computationally very difficult and time-consuming, for confidential data used to train AI models to be leaked or extracted. The primary issues this model addresses are the traditionally low accuracy of voice recognition and the challenge of keeping biometric templates secure from attackers.”

Looking ahead, OmSTU scientists plan to adapt this model for other biometric identifiers, such as handwriting and facial features. Experts also anticipate an increase in attacks on biometric systems using fakes and spoofing techniques, driven by the advancement of generative AI, and are conducting further research to counteract unauthorized access to such information.

By Barnaby Whitfield

Tech journalist based in Birmingham, specializing in cybersecurity and digital crime. With over 7 years investigating ransomware groups and data breaches, Barnaby has become a trusted voice on how cybercriminals exploit new technologies. His work exposes vulnerabilities in banking systems and government networks. He regularly writes about artificial intelligence's societal impact and the growing threat of deepfake technology in modern fraud schemes.
