Malicious attacks on artificial intelligence (AI) based systems are rapidly increasing, and telecom infrastructure is no exception. Adversarial AI attacks have gained momentum over the last few years, given the unprecedented pace at which telecom companies are launching 5G services and using AI and machine learning (ML) to improve network performance and customer experience. As per Deepsig, 55% of decision-makers believe AI has improved network customer experience, 70% believe AI in network planning is the best technique for the switch to 5G, and 64% will focus their AI efforts on network performance.
As the applications of AI and ML increase, so do the security threats to 5G networks (Figure 1). Unscrupulous entities can manipulate AI systems by:
· Bypassing the system by crafting malicious files or injecting noise
· Manipulating results by tampering with processed data and engaging in irrelevant or malicious dialogs with chatbots
· Extrusion of sensitive data by spotting, copying, and transferring it
Figure 1: Security risks across an AI-driven 5G network and why telcos must prioritize mitigation. During the development phase, poisoning and backdoor attacks can compromise integrity; during the deployment phase, risks include evasion attacks and reverse engineering, such as model stealing or data extraction through model inversion or membership inference.
Possible attacks on security and corresponding mitigation approaches
Poisoning: In this type of attack, intruders intentionally inject crafted and irrelevant data into the training repository to pollute and weaken the corresponding AI functionalities. To overcome this, telcos can enhance data quality through thorough pre-processing in a controlled environment. They can also sanitize data, prevent ingestion from unmanaged or uncontrolled data sources, and build ensemble models to restore output.
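As an illustration, a minimal sketch of such sanitization follows, assuming each candidate training record carries a source tag and a numeric feature vector; the allowlist, contamination rate, and use of scikit-learn's IsolationForest are illustrative choices, not part of any specific telco pipeline.

```python
# Hypothetical sketch: sanitize candidate training data before (re)training.
# Assumes each record carries a "source" tag and a numeric feature vector.
import numpy as np
from sklearn.ensemble import IsolationForest

TRUSTED_SOURCES = {"core-telemetry", "ran-counters"}  # illustrative allowlist

def sanitize(records):
    """Drop records from unmanaged sources, then filter statistical outliers."""
    # 1. Provenance check: ingest only from managed, trusted sources.
    trusted = [r for r in records if r["source"] in TRUSTED_SOURCES]
    if not trusted:
        return []

    X = np.array([r["features"] for r in trusted])

    # 2. Outlier filtering: flag crafted/irrelevant points that deviate
    #    from the bulk of the data (contamination rate is an assumption).
    detector = IsolationForest(contamination=0.02, random_state=0)
    keep = detector.fit_predict(X) == 1   # +1 = inlier, -1 = outlier
    return [r for r, ok in zip(trusted, keep) if ok]
```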
Use case: In 5G open radio access network (O-RAN) infrastructure, AI and ML are used to predict traffic and optimize radio resource management. Using a recurrent neural network with long short-term memory (RNN-LSTM), telcos can identify congestion by training the system to predict the traffic patterns of a selected network. The model needs colossal amounts of historical 5G data for training and requires regular retraining to stay current. This creates opportunities for attackers to inject irrelevant data and corrupt the model. Telcos must deploy robust management policies to block ingestion of data from unmanaged and untrustworthy sources.
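A minimal sketch of such a traffic predictor is shown below, assuming TensorFlow/Keras and a fixed window of per-cell KPI samples; the window length, feature count, layer sizes, and placeholder data are assumptions for illustration only.

```python
# Hypothetical sketch: RNN-LSTM traffic predictor for a selected 5G cell.
# Window length, feature count, and layer sizes are illustrative only.
import numpy as np
import tensorflow as tf

WINDOW, FEATURES = 96, 4   # e.g. 24h of 15-min samples, 4 KPIs per sample

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),          # next-interval traffic volume
])
model.compile(optimizer="adam", loss="mse")

# Training data would come only from sanitized, trusted sources (see above).
X = np.random.rand(1000, WINDOW, FEATURES).astype("float32")  # placeholder
y = np.random.rand(1000, 1).astype("float32")                 # placeholder
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```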
Backdoor attacks: Here, attackers implant malicious functionalities in the AI model. Only the perpetrators know how to trigger and execute the attack, which they plant by overfitting the model or incorporating specific rules. Telcos can prevent risks by enhancing data quality, sanitizing data, and detecting and deactivating triggers. Triggers can be detected by identifying statistical differences between authentic and malicious inputs. Regular retraining and data pruning can deactivate the discovered triggers.
Use case: 5G depends on multiple suppliers in a managed, multi-vendor interoperable environment, which can aggravate the potential impact of backdoors introduced through the supply chain. Measuring the statistical difference between ordinary inputs and triggers and retraining the AI model at regular intervals can help reduce risks.
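One way to operationalize this statistical-difference check, sketched below under the assumption that a clean reference set of inputs is available, is a per-feature two-sample Kolmogorov-Smirnov test; the significance threshold and placeholder data are illustrative.

```python
# Hypothetical sketch: flag batches whose feature distributions deviate
# from a trusted reference set, a crude proxy for backdoor-trigger inputs.
import numpy as np
from scipy.stats import ks_2samp

ALPHA = 0.01  # illustrative significance threshold

def suspicious_features(reference, batch, alpha=ALPHA):
    """Return indices of features whose distribution differs significantly."""
    flagged = []
    for j in range(reference.shape[1]):
        stat, p_value = ks_2samp(reference[:, j], batch[:, j])
        if p_value < alpha:
            flagged.append(j)
    return flagged

# Usage: if any features are flagged, quarantine the batch for review
# instead of feeding it into retraining.
ref = np.random.normal(size=(5000, 8))       # clean historical inputs (placeholder)
incoming = np.random.normal(size=(200, 8))   # new batch (placeholder)
print(suspicious_features(ref, incoming))
```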
Evasion attacks: In this scenario, input data is tactfully modified until, eventually, the AI model loses the ability to identify malicious inputs. Telcos can avoid these attacks by compressing data, enabling null labeling for the negatively impacted inputs, restoring inputs and outputs while preserving the essential attributes, and saving the output in an ensemble model.
Use case: Due to the open, broadcast nature of wireless communications and the heterogeneity of IoT data in 5G, telecom networks are vulnerable to evasion attacks. By observing the spectrum, attackers try to infer the channel access algorithms of IoT transmitters. They build deep neural network classifiers that predict transmission outcomes. Based on these predictions, attackers either jam transmissions or manipulate results during the sensing phase by transmitting irrelevant information in the test phase. Preserving corrected inputs and outputs helps mitigate such evasion attacks.
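The data-compression defense can be illustrated with a feature-squeezing style check: quantize the input, re-run the model, and treat a large prediction shift as a sign of adversarial perturbation. The bit depth, threshold, and toy model in the sketch below are assumptions, not prescribed values.

```python
# Hypothetical sketch: detect evasion attempts by comparing predictions on
# the raw input and on a compressed (bit-depth reduced) copy of it.
import numpy as np

def squeeze(x, bits=4):
    """Quantize features in [0, 1] to 2**bits levels (simple compression)."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def looks_adversarial(model_predict, x, threshold=0.2):
    """Flag inputs whose prediction shifts sharply after squeezing."""
    p_raw = model_predict(x)
    p_squeezed = model_predict(squeeze(x))
    return np.max(np.abs(p_raw - p_squeezed)) > threshold

# Toy usage with a placeholder "model" that just averages features.
dummy_model = lambda x: np.atleast_1d(x.mean())
sample = np.random.rand(16)
print(looks_adversarial(dummy_model, sample))

# Flagged inputs can be null-labeled and excluded from retraining,
# as described above.
```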
Model stealing attacks: Here, perpetrators craft a replica model to create auxiliary adversarial samples that work against the live system, or they reconstruct the exact model through repeated API queries without needing the underlying data. Telcos must regularly enhance AI models to safeguard them from stealing attacks. Watermarking also helps protect ownership of the model and its data. Telcos must establish procedures to control queries and detect suspicious ones.
Use case: Adversaries can counterfeit the functionality of any proprietary AI model built on 5G data, such as proactive maintenance or cost prediction, through black-box access, firing queries to clone the original model for financial gain. Restricted and secured API calls and queries, regular model improvement with the latest data, and watermarking can help control this type of attack.
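A minimal sketch of such query control follows, assuming a per-client rolling query budget in front of the prediction API; the budget, window, and client identification scheme are illustrative assumptions.

```python
# Hypothetical sketch: per-client query throttling in front of a model API,
# a basic control against extraction-by-query (model stealing).
import time
from collections import defaultdict, deque

MAX_QUERIES = 100        # illustrative per-client budget
WINDOW_SECONDS = 60.0    # illustrative rolling window

_history = defaultdict(deque)  # client_id -> timestamps of recent queries

def allow_query(client_id, now=None):
    """Return True if the client is within its rolling query budget."""
    now = time.time() if now is None else now
    q = _history[client_id]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= MAX_QUERIES:
        return False          # throttle; optionally raise an alert
    q.append(now)
    return True

# Usage: check allow_query(client_id) before serving a prediction;
# repeated refusals for one client can be escalated for investigation.
```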
Data extraction attacks: Here, attackers exploit information leaked through the input-output pairs of AI-ML models and the corresponding confidence levels, for example customers' personally identifiable information (PII). Telcos can overcome these vulnerabilities by embedding privacy controls in the training datasets, quantitatively measuring data privacy controls at regular intervals, and controlling queries and detecting suspicious ones.
Use case: PII data is often used for building AI models. Restricting and securing API calls and implementing robust data privacy policies can minimize risks related to 5G PII data thefts.
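As one lightweight serving-side privacy control, the sketch below limits what each prediction reveals by returning only the top class and a coarsely rounded confidence score; the rounding granularity is an assumption for illustration.

```python
# Hypothetical sketch: harden prediction responses so they leak less
# information usable for model inversion or membership inference.
import numpy as np

def harden_response(probabilities, decimals=1):
    """Return only the top class and a coarsely rounded confidence score."""
    probs = np.asarray(probabilities, dtype=float)
    top = int(np.argmax(probs))
    confidence = round(float(probs[top]), decimals)
    return {"label": top, "confidence": confidence}

# Usage: the full probability vector stays server-side; clients see only
# a blunted summary, which weakens confidence-based extraction attacks.
print(harden_response([0.07, 0.81, 0.12]))
```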
Prioritizing mitigation efforts
While all of these attacks can negatively impact telecom companies, organizations must prioritize risks and the related actions. Poisoning and evasion are the most common attacks, and backdoors are the toughest to detect. Model stealing and data extraction are the most damaging in terms of data security. As best practices, telcos must regularly sanitize content, implement robust data quality systems, continuously validate data, and retrain models to minimize risks from security attacks.