What Happens If Malware Is Hidden in an AI Neural Network? Here's the Explanation
JAKARTA - A trio of Cornell University academics has shown that malware code can be hidden inside an AI (Artificial Intelligence) neural network. On the arXiv preprint server, Zhi Wang, Chaoge Liu, and Xiang Cui published a paper describing their experiments embedding code into neural networks.
Criminals try to break into devices running new technologies for purposes such as deleting or encrypting data, then demand payment from the victim for recovery. Such attacks will only grow more sophisticated as computer technology advances. In their new study, the researchers identified a technique for infecting computer systems that run artificial intelligence applications.
Neural networks process data in a way loosely modeled on the human brain. However, the research team found that these networks are vulnerable to foreign code being embedded in them.
Because of how neural networks store information, an attacker can hide data inside them: the malicious payload simply has to be written into the network's parameters while preserving its structure, much as new memories are added to the human brain.
The researchers demonstrated this by embedding malware into the neural network behind the well-known image-classification model AlexNet. The payload was large, occupying 36.9 MiB within the model's parameters. To inject the code, they selected the layer of the network they judged optimal for the injection.
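The general idea of hiding bytes inside a model's weights can be sketched in a few lines of Python. This is a simplified illustration, not the authors' actual method: the function names are invented, and the choice of overwriting only the least-significant mantissa byte of each float32 weight is an assumption for demonstration. Because only the lowest byte changes, each weight shifts by a tiny amount, which is why the model's accuracy can remain nearly unchanged.

```python
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the least-significant byte of each float32 weight.

    Illustrative sketch only: one payload byte per weight, written into the
    lowest mantissa byte (little-endian), so each weight barely changes.
    """
    flat = weights.astype(np.float32).ravel().copy()
    if len(payload) > flat.size:
        raise ValueError("payload too large for this layer")
    raw = flat.view(np.uint8).reshape(-1, 4)  # 4 raw bytes per float32
    raw[: len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)
    return flat.reshape(weights.shape)

def extract_payload(weights: np.ndarray, length: int) -> bytes:
    """Recover the hidden bytes from the modified weights."""
    raw = weights.astype(np.float32).ravel().copy().view(np.uint8).reshape(-1, 4)
    return raw[:length, 0].tobytes()

# Hypothetical layer weights standing in for a real model's parameters.
layer = np.random.randn(1000).astype(np.float32)
secret = b"example payload"
stego_layer = embed_payload(layer, secret)

# The payload is recoverable, yet every weight moved by only a tiny amount.
recovered = extract_payload(stego_layer, len(secret))
max_shift = float(np.max(np.abs(stego_layer - layer)))
```

A real attack, as the paper describes, must also smuggle out far more data per parameter and survive the model's file format; this sketch only shows why small per-weight changes leave the network's behavior essentially intact.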
They embedded the code into an already-trained model, though they warned that hackers might instead target untrained networks, since the injection would have less impact on the network as a whole.
According to the researchers, not only did common antivirus software fail to detect the malware, but the AI system's functionality remained virtually unchanged after infection. As a result, if carried out clandestinely, the infection could go unnoticed.
The researchers pointed out that merely embedding malware in a neural network is harmless by itself: an attacker who slips code into the system still needs a way to extract and execute it. They also note that, now that it is known hackers can hide code in AI neural networks, antivirus software can be updated to detect it.