How AI adoption is revolutionizing cybersecurity

The cybersecurity sector is on the brink of an unparalleled transformation, one that is reshaping how organizations navigate the digital realm, write Morné Louwrens, Managing Director of Advisory at Marsh Africa, and Prejlin Naidoo, Digital Partner at Oliver Wyman.

According to a report from the Oliver Wyman Forum, businesses adopted artificial intelligence (AI) systems at a growing rate last year, with generative AI tools seeing especially frequent use.

That adoption carries risks. Employees might, for example, use generative AI tools such as ChatGPT to build presentation slides from internally shared documents, inadvertently exposing confidential corporate information. As these technologies become more deeply embedded in organizational processes through 2025, the dangers will only grow.
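One practical control is a pre-submission filter that screens draft prompts for sensitive markers before they reach an external generative-AI service. The Python sketch below is a minimal illustration of the idea; the pattern list and the screen_prompt function are hypothetical examples, not any vendor's actual safeguards, and a real deployment would sit inside a full data-loss-prevention stack.

```python
import re

# Hypothetical patterns an organization might flag as sensitive before text
# is sent to an external generative-AI service. Real policies would be far
# broader: classification labels, customer identifiers, DLP integration.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "confidential_label": re.compile(
        r"\b(confidential|internal only|do not distribute)\b", re.I
    ),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of sensitive patterns found in a draft prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

draft = "Summarise this CONFIDENTIAL board pack for my slides"
findings = screen_prompt(draft)
if findings:
    print(f"Blocked: prompt matches sensitive patterns {findings}")
else:
    print("Prompt cleared for external AI use")
```

Even a crude filter like this makes a usage policy concrete: risky prompts are stopped or flagged before they ever leave the organization.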

Cybercriminals have recognized the power of artificial intelligence to make their operations more effective and harder to detect.

In addition, the World Economic Forum’s 2025 Global Risks Report identified generative AI as a significant driver of misinformation and disinformation, given its potential to manipulate public opinion.

Security experts can best counter AI-facilitated cybercrime by deploying AI tools of their own to identify and respond to threats, though this is likely to escalate the arms race between attackers and defenders. AI also offers a workforce advantage, helping bridge skill gaps at a time when cybersecurity specialists are in short supply.

Approximately 70% of successful cyberattacks can be attributed to human error, making individuals the weakest link in cybersecurity. Heading into 2025, companies should focus on implementing robust AI governance frameworks and clear usage policies to address this persistent risk.

Fuelled by advances in artificial intelligence, cybercrime has become a rapidly expanding global industry. According to Cybersecurity Ventures, cybercrime cost the world approximately $9.5 trillion in 2024, with projections indicating a rise of about $1 trillion in 2025. If it were a nation, cybercrime would rank as the world’s third-largest economy, behind only the United States and China.

Africa loses billions of dollars each year, with Interpol estimating losses across the region at more than $4 billion. This underscores the critical need for stronger security protocols, a finding reinforced by a Marsh study on emerging cybersecurity risks.

Ransomware is becoming increasingly prevalent across the continent, as cybercriminals see Africa as an ideal testing ground for new methods in the belief that the region is under-prepared. According to Interpol, previous ransomware attacks in Africa have primarily targeted essential infrastructure such as medical facilities and government operations, threatening both public safety and economic stability.

In 2025, geopolitical tensions and mistrust, exacerbated by AI-generated misinformation and disinformation, will drive an increase in targeted cyberattacks.

According to the 2025 Global Risks Report, misinformation and disinformation amplify uncertainty and mistrust. Education and awareness are vital for African countries, which remain vulnerable to these challenges and the national-security risks they carry.

Cybercriminals will employ AI to streamline various phases of ransomware attacks, enabling large-scale operations that target numerous organizations simultaneously.

Machine learning could allow ransomware to adapt in real time, altering its behavior based on data collected from the compromised system. Conventional security tools may struggle to identify and counter such threats.

AI-driven phishing attacks are growing more sophisticated and personalized, mining social media and communication habits to craft believable messages. This makes malicious emails progressively harder to spot, even for well-trained staff, since AI eliminates common giveaways such as typos and grammatical errors.
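On the defensive side, the same statistical learning can be turned against phishing. Below is a minimal sketch, assuming scikit-learn is available, of a toy text classifier trained on a handful of labelled messages; the four example emails and the model choice are purely illustrative, and a production system would train on large labelled corpora and combine text signals with sender, domain, and link metadata.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; real training data would run to thousands
# of labelled messages.
emails = [
    "Your account is locked, verify your password at this link immediately",
    "Urgent: confirm your banking details to avoid suspension",
    "Quarterly planning meeting moved to Thursday at 10am",
    "Please review the attached minutes from yesterday's project call",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# Bag-of-words model: crude next to the AI-crafted phishing described
# above, but it shows the defensive pattern of learning from examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

incoming = ["Verify your password now to keep your account active"]
print(model.predict(incoming))        # predicted class
print(model.predict_proba(incoming))  # confidence scores
```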

As artificial intelligence makes programming more accessible, attackers may also find it easier to develop malware to embed behind links in phishing emails. Proof-of-concept programs such as IBM’s DeepLocker have shown how AI techniques can conceal malware’s true intent and evade detection by keeping it dormant until triggered, sidestepping conventional defenses. Such AI-driven malware could inflict significant damage on corporate networks through greater stealth and precision.

In 2024, a finance employee at a prominent Hong Kong company was tricked into transferring $25 million to criminals. The employee had been lured into a video conference with individuals they believed to be colleagues, and approved the transfer on their instruction.

In fact, every other participant on the call was an AI-generated imitation. The attack was carried out using deepfakes: synthetic content produced with artificial intelligence and advanced machine-learning techniques.

Deepfake technology can convincingly manipulate both content and the likenesses of real people.

The U.S. Department of Homeland Security cautions that deepfakes pose a significant risk because individuals tend to trust visual information, and synthetic media is likely to be employed more frequently for deceptive purposes.

The improper use of deepfake technology has led to calls for better tools to detect AI-altered media.

Industries such as financial services must take proactive steps to strengthen their cybersecurity frameworks, prioritize sophisticated detection systems, and promote user education to counter the more intricate cyber threats expected in 2025.

Companies are also leveraging AI to strengthen their own defenses. AI excels at processing and analyzing large volumes of data, making it far more efficient at tasks such as pattern recognition and behavioral analysis. That makes AI essential in combating the misuse of AI itself.
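As a rough sketch of that pattern recognition, the example below fits an unsupervised anomaly detector to synthetic user-session features and flags behavior that deviates from the learned baseline. It assumes NumPy and scikit-learn; the features and numbers are invented purely for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic per-session features (illustrative only): login hour,
# megabytes transferred, and number of distinct hosts contacted.
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.normal(10, 2, 500),   # logins clustered around business hours
    rng.normal(50, 15, 500),  # typical data volumes
    rng.normal(5, 2, 500),    # typical host counts
])

# The unsupervised model learns the shape of "normal" behavior, then
# flags outliers: the pattern-recognition role described above.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

sessions = np.array([
    [11, 55, 6],    # ordinary working session
    [3, 900, 40],   # 3am login moving 900 MB to 40 hosts
])
print(detector.predict(sessions))  # 1 = normal, -1 = anomalous
```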

Financial services firms, including banks, are already using AI for security measures like facial recognition for user authentication.

Others employ AI to help programmers fix vulnerabilities and to sharpen risk assessments by analyzing data gathered across multiple investigations.

Financial services firms also actively educate their clients about the latest fraud trends through advisory messages. This matters because individuals are often the weakest point in the cybersecurity chain.

Successfully navigating the cybersecurity landscape of 2025 will require more than sophisticated tools.

Genuine resilience comes from close collaboration between people and artificial intelligence, with technology strengthening defenses and human judgment guiding secure practices.

By adopting this holistic approach, companies can reduce their exposure and build a foundation strong enough to let them thrive in the face of the challenges ahead.

Provided by SyndiGate Media Inc. (Syndigate.info).