AI Cyberattacks – Expert Advice for Small and Medium Businesses

Main points

  • AI-powered cyberattacks are changing tactics rapidly, above all through automation and targeting, forcing businesses to adapt and roll out new security measures such as two-factor authentication and regular backups.
  • Experts stress the legal side of cybersecurity: keeping incident response plans up to date and assigning clear responsibility for following cyber protection procedures, especially as threats grow.
  • 1 What is different about the recent major cyberattacks?
  • 2 AI is already changing the rules: why should we be wary of new cyberattacks?
  • 3 What can small and medium-sized businesses do to stay safe?
  • 4 Recent Cyberattacks: A Legal Perspective

Cyberattacks are increasingly intruding into everyday life – stopping deliveries, breaking online payments, paralyzing service centers, and becoming visible to millions of people at the worst possible moment. Ukrainians learned this during the 2020s cyberattacks on state registries and mobile networks. The key change in recent years is not just the growing number of incidents, but their speed and scale. New tools make an attack look like an assembly line rather than a handcrafted "lone wolf" operation. Artificial intelligence is one of those tools.

A telling episode occurred in France just before Christmas: on December 22, 2025, the postal operator La Poste reported a network incident that disrupted its information systems – La Banque Postale services and the laposte.fr website were also affected, and branch operations could be temporarily disrupted. This meant not just inconvenience but concrete delays in logistics and online payments during the peak season.

Against this backdrop, the geopolitical dimension of cyber risks is increasingly evident: states are integrating cyber operations into military and political planning, combining classic espionage with disruptive scenarios and information operations. Western assessments explicitly describe China's growing ability to "pre-position" itself inside critical infrastructure, while Russia uses a mix of state and non-state actors, complementing cyberattacks with disinformation and influence operations.

Channel 24 spoke with business and legal experts about how companies can protect themselves from the latest AI-driven attacks.

In its article on how cyberattacks are changing, Bloomberg describes a logic familiar to cyber experts: criminals constantly change tactics and quickly domesticate new technologies. Once it was cryptocurrencies and ransomware; now artificial intelligence is coming to the fore.

The most high-profile illustration of this shift is a case study published by Anthropic in November 2025. The company said it had discovered and stopped an espionage campaign where, according to its assessment, a Chinese state-backed group was attempting to use the agentic capabilities of the Claude Code tool to penetrate approximately 30 targets around the world (from large technology companies and the financial sector to the chemical industry and government agencies), and in some cases it was successful.

Anthropic called this the first documented example of a large-scale attack carried out without significant human intervention, emphasizing that it was no longer about the model's “hints” but about the semi-autonomous execution of a chain of actions.

The French situation with La Poste showed the other side of the same coin – a cyberattack may not steal data, but it can stop a service. On December 22, 2025, the company reported failures in its systems and delays in deliveries and online payments. The media described the incident as a DDoS that made online services unavailable; La Poste publicly assured that customer data was not affected.

Then comes the overlay typical of a "hybrid" reality. On December 24, media reported that the pro-Russian group NoName057(16) had claimed responsibility; French law enforcement was investigating the incident, with the DGSI intelligence service involved. Meanwhile, La Poste's official website posted updates: as of December 26, the attack had been contained, services restored, and delivery was operating normally; the company also gave indicative load figures – notably 5.5 million parcels delivered from Monday through December 24.

Taken together, these cases add up to one trend: when tools make an attack faster and cheaper, and political motivation adds a "plot" and targets, the line between a cyber incident and a social catastrophe blurs.

Ihor Puts, head of the IT Center at the Volynproekt municipal enterprise, describes the paradigm shift in cyberattacks bluntly: artificial intelligence has already changed the rules of the game.

Ihor Puts

Head of the IT Center at the Volynproekt Enterprise

In the past, creating a complex virus required deep knowledge of code. Today, attackers use large language models (LLMs) to write polymorphic code that constantly changes its structure to bypass antivirus software. AI gives fraudsters enormous variability and "smarts": attacks become personalized rather than templated. The risk of mass attacks is already a reality: automated bots can scan thousands of systems per minute, finding vulnerabilities that previously would have taken a person weeks to locate.

At the same time, Puts stresses that even as hardware and software spending rises, the main vulnerability remains the human. He calls this the "illusion of technical security," when an expensive firewall is mistaken for protection. According to Puts, 90% of successful breaches begin with a phishing email an employee opened through inattention. The second mistake is the absence of a "Zero Trust" culture: in many organizations, he says, once an attacker is "inside" the network, the system starts to trust them, and the hacker can move around it for months.

Among the most vulnerable areas, Puts singles out the financial sector, energy and logistics – where the price of downtime is the highest. But he separately highlights the media and the state sector: in such cases, the goal of the attack is often not money, but destabilization, data manipulation and the effect in the information war. AI, he says, makes it possible to “stamp” fake news and documents in large volumes so that without careful verification they are increasingly difficult to distinguish from the real ones.

Puts considers voice and video deepfakes the most dangerous example of the evolution of social engineering. Where phishing used to give itself away through errors and garbled language, messages can now be perfectly literate and stylistically indistinguishable from letters written by a specific person.

Ihor Puts

Head of the IT Center at the Volynproekt Enterprise

Regarding voice and video deepfakes: although mass fraud with high-quality deepfakes has not yet been recorded in Ukraine, I am convinced it will arrive very quickly. Voice cloning technology is already available to everyone. Imagine a call to the chief accountant in the director's voice demanding an urgent transfer of funds. How to adapt? Companies urgently need to implement offline verification protocols – for example, a system of code words to confirm transactions, or a "callback" rule, where the employee calls the manager back on a known number.
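The callback rule described here can be reduced to one mechanical principle: confirm sensitive requests only through a contact channel taken from an internal directory, never through details supplied in the request itself. A minimal sketch of that logic, with hypothetical names and numbers:

```python
# Sketch of the "callback" rule against voice-deepfake fraud: the number
# to call back always comes from an internal directory, never from the
# (possibly spoofed) request. Roles and numbers are illustrative.

KNOWN_CONTACTS = {            # maintained internally, not user-supplied
    "director": "+380441234567",
}

def callback_number(request: dict) -> str:
    """Return the directory number to call back, ignoring any number
    the request itself supplies."""
    role = request["claimed_role"]
    if role not in KNOWN_CONTACTS:
        raise ValueError(f"No verified contact for role: {role}")
    supplied = request.get("reply_to_number")
    trusted = KNOWN_CONTACTS[role]
    if supplied and supplied != trusted:
        # A mismatch is a red flag worth logging, but either way we
        # call the trusted number, never the supplied one.
        print(f"warning: request supplied {supplied}, using {trusted}")
    return trusted

request = {"claimed_role": "director",
           "reply_to_number": "+380999999999",   # attacker-controlled
           "action": "urgent wire transfer"}
print(callback_number(request))   # always the directory number
```

The design choice is that the attacker controls the request contents but not the directory, so even a perfect voice clone cannot redirect the verification call.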

Most painfully, he adds, small and medium-sized businesses often live with the false idea: “We're small, no one cares about us.” But for automated attacks, “size” doesn't matter – they “vacuum” everyone who has vulnerabilities.

Critical steps needed now:

  • Two-factor authentication (2FA) wherever possible. This stops 99% of automated account attacks.
  • Regular backups stored separately from the main network. When ransomware locks your data, a backup is the only way to avoid paying the ransom.
  • Software updates. Most attacks exploit vulnerabilities for which patches were released long ago but were simply never installed.
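The 2FA in the first bullet usually means time-based one-time passwords (TOTP, RFC 6238), the mechanism behind most authenticator apps: both sides derive a short code from a shared secret and the current 30-second time step, so a stolen password alone is not enough. A self-contained sketch using only the standard library:

```python
# Minimal TOTP (RFC 6238) implementation and verification check.
import hmac, hashlib, struct, time

def totp(secret: bytes, timestamp=None, digits: int = 6,
         step: int = 30) -> str:
    """Derive the one-time code for the given moment."""
    counter = int(timestamp if timestamp is not None else time.time()) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, window: int = 1) -> bool:
    """Accept codes from the current step and +/- `window` steps to
    tolerate clock drift; compare in constant time."""
    now = int(time.time())
    return any(hmac.compare_digest(totp(secret, now + i * 30), submitted)
               for i in range(-window, window + 1))

secret = b"12345678901234567890"   # demo secret from the RFC test vectors
print(totp(secret, 59))            # RFC 6238 test time 59 -> "287082"
```

In practice a business would rely on an existing authenticator app or identity provider rather than rolling its own; the sketch only shows why the codes expire and why intercepting one is of little use.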

AI is shifting the balance in cyberattacks very quickly, and the unpleasant part is that the speed here is economic, not technological, says Elena Nusinova, director of Smart Corporate Service LTD, Doctor of Economics, MBA, DBA in corporate governance, in a comment for Channel 24.

Elena Nusinova

Director of Smart Corporate Service LTD, Doctor of Economics

AI has reduced the cost of an attack. That is the key. When the cost falls while the "payoff" – money, access, blackmail, or destabilization – stays high, the market for attacks grows by itself. Will they become massive? Yes, the risk is high. Not because hackers will "become more brilliant," but because automation makes the flow possible. This is no longer a bespoke story "for a specific victim"; it is a "thousand attempts – a dozen hits" model.

“The failures are not in technology, but in management,” says Nusinova:

  • First – no access discipline. Excessive rights, shared accounts, weak passwords, no MFA. It is banal, but it is the banal that lets attackers in.
  • Second – patching holes instead of building a system. Companies buy solutions, install "boxes," write policies, but never build the process: who is responsible, how it is monitored, what counts as critical, how fast to react.
  • Third – backups as a myth. They exist "somewhere," but no one ever tests recovery. Then it turns out there is nothing to restore, or recovery will take weeks. In wartime, that is a luxury no one can afford.
  • Fourth, especially in the public sector – formal procedures exist, but responsibility is blurred. As a result, the incident becomes "nobody's business."

The interlocutor considers the most vulnerable industries to be those where downtime is expensive and the decision-making chain runs on trust:

  • finance – money moves fast, processes are mass-scale, and a mistake costs money immediately;
  • energy and infrastructure – systems are often old, updated slowly, and downtime is unacceptable;
  • logistics – any failure is multiplied along the entire supply chain;
  • medicine – the data is sensitive, and the systems cannot afford to go down.

“Why is AI dangerous here? Because it provides cheap targeting: to a specific person, to a specific unit, with plausible language and context. And it does it on a large scale,” says Nusinova.

Social engineering is becoming tougher, as the basic pillar of corporate life is being destroyed – “I trust voice, face, status.”

Elena Nusinova

Director of Smart Corporate Service LTD, Doctor of Economics

The most vulnerable place is finances and access. Where “pay urgently”, “give access urgently”, “change details”. And a person does it because “the manager said so”.

What should change in procedures, according to the expert? Start not with "deepfake awareness training," but with rules that do not depend on emotions:

  • for payments and changes of payment details – confirmation through a second channel (not the one the request arrived through) is mandatory;
  • a two-person rule for critical operations;
  • a strict ban on "urgent orders" without verification, even if it is the top executive calling;
  • a simple principle: voice and video are no longer evidence. Procedure is evidence.
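The two-person rule in this list can be enforced in code rather than by habit: a critical operation executes only with approvals from two distinct authorized people, and the requester can never approve their own request. A minimal sketch with hypothetical role names:

```python
# Sketch of the "two-person rule" for critical operations: require two
# distinct, authorized approvers, excluding the requester themselves.
# The authorized set and role names are illustrative.

AUTHORIZED = {"cfo", "deputy_cfo", "chief_accountant"}

def can_execute(requester: str, approvals: set) -> bool:
    """True only if at least two authorized people, other than the
    requester, have signed off."""
    valid = (approvals & AUTHORIZED) - {requester}
    return len(valid) >= 2

print(can_execute("clerk", {"cfo", "chief_accountant"}))  # True
print(can_execute("cfo", {"cfo", "deputy_cfo"}))          # False: self-approval
```

The point of excluding the requester is exactly the deepfake scenario above: even a convincing "order from the director" still needs a second, independent human sign-off.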

Small and medium-sized companies are mostly not ready for these challenges. Not because they are careless, the expert believes, but because they live without a safety margin: they lack the people, time, and budget for complex architectures.

But here is the minimum that must be in place; otherwise it is a game of roulette:

  • MFA for email, banking, admin accounts, and cloud services. Non-negotiable;
  • backups with verified recovery. Not "we back up," but "we restored it and it works";
  • removing unnecessary admin rights and putting access rights in order;
  • updating critical systems and the external perimeter – by priority, not "whenever possible";
  • financial control procedures: changes of payment details and payments only with double confirmation.
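"We restored it and it works" can be automated: restore the archive into a scratch directory and compare file hashes against the originals, instead of trusting that the archive merely exists. A minimal sketch with illustrative paths:

```python
# Sketch of backup *recovery* verification: restore the archive into a
# temporary directory and confirm every source file came back
# byte-identical. Paths and file names are illustrative.
import hashlib, pathlib, tarfile, tempfile

def sha256(path: pathlib.Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_backup(source_dir: str, archive: str) -> bool:
    """Restore `archive` to a temp dir and check it matches `source_dir`."""
    src = pathlib.Path(source_dir)
    with tempfile.TemporaryDirectory() as tmp:
        with tarfile.open(archive) as tar:
            tar.extractall(tmp)
        for f in src.rglob("*"):
            if f.is_file():
                restored = pathlib.Path(tmp) / f.relative_to(src.parent)
                if not restored.is_file() or sha256(restored) != sha256(f):
                    return False
    return True

# Demo: back up a directory, then prove the restore actually works.
src = pathlib.Path(tempfile.mkdtemp()) / "data"
src.mkdir()
(src / "ledger.txt").write_text("critical records")
with tarfile.open(src.parent / "backup.tar.gz", "w:gz") as tar:
    tar.add(src, arcname="data")
print(verify_backup(str(src), str(src.parent / "backup.tar.gz")))  # True
```

In a real setup this check would run on a schedule against the offline copy, so "nothing to restore" is discovered before the ransomware incident, not during it.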

Elena Nusinova

Director of Smart Corporate Service LTD, Doctor of Economics

I would sum it up like this: cybersecurity for SMEs is not about expensive solutions. It is about a few rules that eliminate the most common entry points. If those rules are not adopted at the owner or director level, everything else is just decoration.

The creation of new threats is accelerating, but so is the cybersecurity sector, so I would not yet say there are signs of a major shift in the balance of power, says Andriy Alekseev, IT Director of Meest China, in a comment to Channel 24. But the longer companies lag in deploying current protection tools, the more dangerous that lag becomes. Automated attacks have long been massive; AI adds ease of execution and efficiency – on top of fundamentally new attack methods.

Andriy Alekseev

IT Director Meest China

The emergency change in the architecture of many large companies after the full-scale invasion – above all the migration to cloud services abroad – created a large technical debt in security, including frequent non-compliance with security requirements and recommendations, even ones formulated decades ago. Add to that the lack of a security strategy and standards and the absence or inefficiency of security assurance and control processes: the larger the company, the more complex these are and the more qualified specialists and budget they require.

As for the most vulnerable targets because of the war – with or without the use of AI – these are primarily government institutions, the defense industry, telecommunications, financial institutions, and the energy sector, the interlocutor believes.

Alekseev also ranks social engineering among the top current dangers. Multi-factor verification, employee training and alerts, and monitoring of anomalous activity are mandatory.

Andriy Alekseev

IT Director Meest China

Even the largest digital giants cannot be 100% prepared for every cyberattack, so it is a question of the probability and cost of a successful attack against the company. Readiness varies widely among small and medium-sized companies, but successful attacks will primarily hit those who do not prioritize security and those most interesting to attackers. The mandatory steps are to analyze and assess the company's current state of cyberdefense, develop a strategy to close weaknesses, and establish control over the situation.

A cyberattack on a business has long ceased to be force majeure in itself; it is now, first of all, a legally documented event assessed against the company's existing body of internal documents, says Serhiy Dzis, partner at Syrota Dzis Melnyk & Partners, in a comment to Channel 24. "So if a cyberattack case reaches a court or a regulator, they may assess not the fact of the attack itself, but whether the company acted within its managerial and legal duties before and during the incident. The burden of responsibility may thus fall on the injured party as well," the specialist says.

Serhiy Dzis

Partner, Syrota Dzis Melnyk & Partners

Despite growing investment in cybersecurity, businesses and state-owned companies often remain vulnerable because they do not treat cybersecurity as an element of good corporate governance. Tellingly, about 70% of data leaks are non-technical in nature – however it may sound, the human factor. A separate problem is cybersecurity tools built on artificial intelligence. The projected 4% increase in cybersecurity budgets in 2025 does not match the real scale of threats, and attempts to compensate for the expertise deficit through automation without human oversight create another risk: breach of management's fiduciary duties.

In addition, many organizations lack an up-to-date, working incident response plan. As a result, evidence is lost, regulatory notification deadlines are missed, and chaotic actions only make things worse. "In Europe, such mistakes can bring fines regardless of the scale of the leak – for example, for missing the 72-hour notification deadline under the GDPR," the expert notes.
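The 72-hour clock under GDPR Article 33 runs from the moment the controller becomes aware of the breach, so a response plan can compute the hard deadline mechanically instead of under stress. A trivial sketch (the awareness timestamp is illustrative):

```python
# GDPR Art. 33: notify the supervisory authority within 72 hours of
# becoming aware of a personal data breach. Computing the deadline up
# front keeps it out of the chaos of incident response.
from datetime import datetime, timedelta, timezone

def notification_deadline(aware_at: datetime) -> datetime:
    """Latest moment to notify the supervisory authority."""
    return aware_at + timedelta(hours=72)

aware = datetime(2025, 12, 22, 9, 30, tzinfo=timezone.utc)
print(notification_deadline(aware))  # 2025-12-25 09:30:00+00:00
```

An incident response plan would pin this timestamp in the first incident record, since "when did we become aware" is itself a fact regulators examine.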

According to Dzis, cybersecurity in Ukraine is regulated by Order No. 798 of the State Special Communications Service dated December 3, 2025, which sets requirements for cybersecurity units and CISOs – essentially new rules for how security must be organized. If the unit is absent, powers are blurred, or the responsible person is appointed only formally, then after a successful cyberattack on a business – especially a state-owned enterprise – the actions of the responsible persons may be qualified as official negligence (Article 367 of the Criminal Code of Ukraine).

Serhiy Dzis

Partner, Syrota Dzis Melnyk & Partners

Finally, there is so-called Shadow AI – employees using generative models without clear policies, permissions, and controls. In that situation the company is simultaneously exposed to leakage of trade secrets, loss of intellectual property rights, and violations of personal data legislation. The critical point is that without an approved AI policy the employer largely loses its disciplinary and evidentiary tools: if a ban is not legally formalized, proving an employee's bad faith or substantiating management decisions after the fact takes far more effort.

However we imagine cyberattacks, with or without AI, everything comes down to legally sound documentation that can confirm managers and officials did everything possible to avoid losses – and to clear responsibility for violating those policies, the expert concludes.
