The Evolving Role of SecOps
As automation reshapes SecOps, new models are needed to secure data without limiting innovation. Insights from Fortinet experts Renaud Bidou and Stéphane Palumbo.
February 9, 2026

Stéphane Palumbo, Senior Systems Engineer, and Renaud Bidou, Specialized SE Director, Southern Europe & Benelux, Fortinet
2025 is already emerging as a pivotal year for cloud expansion, hybrid infrastructure, and accelerated AI adoption within enterprises. These shifts are reshaping the operational reality for security teams, who must uphold robust visibility and control across increasingly distributed environments.
As organisations modernise their architecture and embrace automation, new industry dynamics are defining how SecOps evolves, requiring new models that can protect critical data and services without slowing innovation. We spoke to Renaud Bidou, Specialized SE Director, Southern Europe & Benelux, and Stéphane Palumbo, Senior Systems Engineer at Fortinet, to get their views.
What industry trends are shaping how SecOps teams adapt their visibility and control models?
Renaud Bidou: When we talk about trends, we should start with which kind of assets we really need to protect. Twenty years ago, security was focused on the network, or more specifically ‘communications and connections’. Attention then turned to systems, IT infrastructure, and users.
Today, the core asset is the data. SecOps teams need global visibility into infrastructure, although nowadays they are focusing more on the data. What is happening with the data, and who is getting access to it?
When data is created, stolen, or altered, a key question becomes how the data was transferred and in what form. Historically, organisations had on-prem computers and local networks, sometimes connected to another branch; then came virtualized environments.
Now, multiple computers or servers connect to the cloud. That means SecOps teams have less direct control, with data stored in locations outside their immediate scope. Even the cloud has become virtualized with containers and containerised infrastructure, where different pods operate independently. Organisations must now supervise not only their on-prem servers and networks but also their cloud, virtualized environments, and containers.
This shift defines the role of Secure Operations Centres. Twenty-five years ago, hackers targeted servers through users. Then came denial-of-service attacks, where SOCs ensured service availability. About a decade ago, ransomware emerged and changed everything, because when it hits, operations stop entirely. SOC teams now react faster to protect the entire infrastructure before attacks propagate.
“Today, the focus is on the core assets; it is more about data: user details, phone numbers, passwords, and whatever is stored there.”
Each new attack type reshapes how Secure Operations teams work, as impacts differ even on similar data. AI fundamentally alters the amount of data that can be analysed and therefore the reaction time needed for SecOps to supervise networks or environments.
Stéphane Palumbo: From a practical perspective, and based on what I’ve seen in the field, people are increasingly focusing on what I would call the data plane sensors: anything related to firewalls and EDR, the multiple technologies that customers have been deploying to that end. But the big shift we’ve seen, a shift perhaps accelerated by the move to the cloud, is that people are now starting to realise that they have a big data lake, but they do not know how to make the best use of it.
Users want to leverage this to achieve positive outcomes, and that’s where AI may come into play, helping people process the large amount of information in the back end. There’s a growing realisation that it’s one thing to have data at your disposal, but what you actually do with it, and how efficiently you treat it, is the key. That’s the main focus we see.
Attackers are also evolving; many now use AI and automation themselves. What changes must SOCs make to stay ahead using the same tools?
Renaud Bidou: We are trying to stay ahead, often while using the same tools! Hackers are using AI to accelerate their capability to weaponize any kind of threat or vulnerability that has been identified. They analyse the way you work and the way you write, and then exploit this. They are smart enough to evade detection, so with almost zero knowledge about their target, they can trick users into giving away personal data. For example, they impersonate someone’s email so that you believe it’s an email from the CEO or CFO requesting a wire transfer. The solution lies in training and awareness, because people need to be prepared.
Something that just came out recently is the use of malicious AI agents. Previously, malware could have simply scanned a network and then propagated, effectively a server-based attack. But now, the attacks can be far more contextualised and consequently smarter, using natural language. Malware will become capable of adapting itself to the context, and that’s really something that’s changing; it’s still very new.
We need to be ready to fight malicious autonomous agents. The attacker and the attacking process can no longer be regarded as static; they are becoming more dynamic, faster and faster, and the impact is much greater than it used to be, so we need to be able to react even faster than before.
This is where AI is part of the solution as much as the attack. Traditional Security Operations Centre response times were days or weeks, as teams needed to evaluate all the events raised by security systems to find those that were relevant. Steps to mitigate threats could take a lot of time and people. Thanks to AI, this can all be accelerated. AI-powered analysts still need humans to interpret information and make mitigation recommendations, but instead of days or weeks, this can now happen in seconds. We don’t want AI to have complete autonomy. Everyone has seen Terminator, and we don’t want to see Skynet taking control. But it does mean that when a human is needed, they can act much faster.
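The triage idea described above can be sketched in a few lines. This is a minimal illustration, not any vendor's actual scoring model: a stand-in scoring function ranks alerts so a human analyst only reviews the most relevant ones first. The field names and weights are assumptions made for the example.

```python
# Sketch of AI-assisted triage: instead of humans reading every event,
# a scoring model (here a trivial stand-in) ranks alerts so analysts
# see the most relevant ones first. Field names and weights are
# illustrative, not a real product's schema.

def score(alert: dict) -> float:
    """Assign a relevance score to a single alert."""
    s = {"low": 0.1, "medium": 0.4, "high": 0.8, "critical": 1.0}[alert["severity"]]
    if alert.get("asset_is_crown_jewel"):
        s += 0.5   # data-centric weighting: core data assets matter most
    if alert.get("matches_threat_intel"):
        s += 0.3   # contextual threat intelligence raises priority
    return s

def triage(alerts: list[dict], top_n: int = 3) -> list[dict]:
    """Return the top_n alerts a human should look at first."""
    return sorted(alerts, key=score, reverse=True)[:top_n]
```

In a real SOC the scoring function would be a trained model rather than fixed weights, but the shape of the pipeline is the same: machines rank, humans decide.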
Stéphane Palumbo: No matter the size of the SOC or the organization, leveraging AI has become essential. It’s simply too efficient to ignore. Expanding headcount endlessly only adds complexity, whereas AI helps streamline operations. The main trend I’m seeing is that teams are actively demanding generative AI tools to boost their efficiency and effectiveness in daily tasks.
Fortinet describes its SecOps platform as shifting from ‘detect and respond’ to ‘detect and disrupt.’ How would you design or tune a security operations pipeline, from alerting to remediation?
Renaud Bidou: “The earlier an intrusion is detected, the more effective the mitigation steps can be. When we talk about the detect-and-respond methodology, it means we have detected that a server, laptop, or user account has already been compromised. We then shut the door to prevent propagation. You need to be quick, because you know the intrusion has already started somewhere, and hackers move fast too!
Their ultimate goal may be the full compromise of the IT infrastructure. Our role is to kill the chain, to be proactive, and even to detect someone who may only be thinking of compromising a network.
In terms of the pipeline, first we evaluate the attack surface: what is actually exposed to the external world. We also consider whether there could be an internal attacker or intruder. Then you need contextual threat intelligence; for example, are there threats targeting your market, industry, geography, or company size? All this information helps to identify very early-stage events.
We also stay ahead of hackers by leveraging deception technology. We set up fake servers, create fake system accounts, fake files or even fake industrial systems and if anyone interacts with them in any way, we immediately know it’s malicious activity, because there’s no legitimate reason to access that server or open that file. We provide prevention capabilities to the security operations center, enabling it to identify who is doing what and to quickly shut the door.”
Stéphane, how do you feel about the way that security operations centres have changed?
Stéphane Palumbo: “We’ve seen quite a few real-world examples of attacks on major customers, with impacts ranging from severe to relatively minor. Take ransomware, for instance, cases where production is completely halted. In nearly all such incidents, there’s a trusted partner involved who helps rebuild the environment and restore operations. But the question always remains: what went wrong?
Most companies already have protection in place, so why did the attack succeed? In many cases, it’s due to the absence of early warning, the missed signs of a brewing attack. That’s where reconnaissance technologies make a difference. They allow organizations to monitor what’s being said about them on the dark web and other sources. It’s a relatively simple yet highly effective step that many still overlook.
Another emerging trend comes from within organizations themselves. Many customers now want to apply those same reconnaissance principles internally, studying attacker behavior and spotting threats before they materialize. Honeypots play an important role here, acting as early-warning systems embedded within the network to detect malicious activity.
But it doesn’t stop there. Companies are beginning to move beyond passive detection, using their existing security tools to take proactive action. That’s the direction we’re heading. Attacks will continue to happen (that’s unavoidable) but organizations are becoming more adaptive, learning from each incident, thinking creatively, and taking a far more disruptive approach to cybersecurity.”
Fortinet offers a broad portfolio of tools, including FortiSIEM, FortiSOAR, NGFW (FortiGate), and integrated sensors. Yet many companies still instinctively consider multi-vendor environments to be safer. What’s your advice for enabling tools to work effectively together?
Stéphane Palumbo: Our approach and belief is that a security system is only as strong as its integration. You can have multiple products in your portfolio, as we do, but if they don’t communicate efficiently and exchange information seamlessly, you don’t have a consolidated or reliable foundation for your security.
In practical terms, there used to be separate solutions, network protection such as firewalls, endpoint tools like antivirus, or email defense systems, but they often operated in silos. They might each trigger alerts, but what’s the point if you need a large back-end team just to process and correlate the data? In my experience, both in integration work and from the field, it’s common to see organizations struggling to make these systems work together.
I’ve seen many cases where companies tried to integrate best-of-breed products from different vendors. On paper, it sounded ideal: take vendor A’s logs and feed them into vendor B’s platform. In reality, it rarely worked smoothly. Our philosophy is different. We believe that products should communicate natively through APIs, with everything carefully designed to work as one coordinated system. That doesn’t mean being locked into a single vendor. We maintain an open ecosystem with third-party connectors, so we can integrate with other solutions, but in the right way. Essentially, our goal is to ensure that meaningful security information flows into one unified environment, allowing customers to see, understand, and act on it efficiently. Put simply, integration and collaboration are the foundations of effective cybersecurity.
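The "one unified environment" idea boils down to normalising events from different sensors into a common schema before correlation. The sketch below shows that shape with two hypothetical sensor formats; none of the field names correspond to a real vendor API.

```python
# Sketch of event normalisation: a firewall log line and an EDR alert
# arrive in different shapes, and are mapped into one common schema so
# correlation becomes a simple join instead of a per-vendor parsing
# effort. All field names here are illustrative assumptions.

def normalize_firewall(line: dict) -> dict:
    return {
        "source": "firewall",
        "time": line["ts"],
        "actor": line["src_ip"],
        "target": line["dst_ip"],
        "action": line["action"],        # e.g. "deny"
    }

def normalize_edr(alert: dict) -> dict:
    return {
        "source": "edr",
        "time": alert["detected_at"],
        "actor": alert["process"],
        "target": alert["host"],
        "action": alert["verdict"],      # e.g. "quarantined"
    }

# Once both sensors emit the same keys, one pipeline can consume both.
events = [
    normalize_firewall({"ts": "2026-02-09T10:00:00Z", "src_ip": "10.0.4.17",
                        "dst_ip": "203.0.113.5", "action": "deny"}),
    normalize_edr({"detected_at": "2026-02-09T10:00:02Z", "process": "powershell.exe",
                   "host": "ws-041", "verdict": "quarantined"}),
]
```

Native API integration, as described in the interview, essentially pushes this normalisation into the products themselves instead of leaving it to a large back-end team.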
Could you walk us through your approach to incident handling, particularly when deciding between patching and implementing virtual patching?
Renaud Bidou: All systems must be patched. That’s the rule. But there may be some operational constraints or contractual engagements that prevent patching. For example, hospital equipment often runs 24/7 with very limited maintenance windows, sometimes just one hour a month. Similarly, if you work for a telco operator and have promised end users 99.99 percent uptime, that means you only have a few hours per year when maintenance can actually be done. This is where virtual patching comes into play. You’re basically buying time until you can apply the real patch.
It’s not a question of ‘shall I patch or not?’ Yes, you have to patch. But if you don’t have the time, then do virtual patching. It’s very simple.
Stéphane Palumbo: We’ve been using virtual patching for quite some time, especially in Web Application Firewalls and Web Application and API Protection, where the concept is well established.
You can’t just fix code on the fly. In many cases, that’s simply not possible as the application may no longer be supported, or it’s a legacy system where changes could break functionality entirely. This is where virtual patching really shines: it allows organizations to protect systems without altering the underlying code. Many modern tools now leverage this approach to provide immediate mitigation while permanent fixes are developed in the background.
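A virtual patch is, in essence, a filtering rule placed in front of the vulnerable application. Here is a minimal sketch of the concept as a WAF-style check; the traversal pattern and paths are invented for illustration and do not correspond to any specific CVE or product rule.

```python
# Minimal sketch of a virtual patch: a WAF-style rule blocks requests
# matching a known exploit pattern, shielding the unpatched application
# until the real fix can be deployed. Pattern and paths are illustrative.

import re

# Hypothetical scenario: the vulnerable endpoint mishandles "../"
# sequences, so the virtual patch rejects any traversal attempt
# before it ever reaches the application code.
VIRTUAL_PATCH = re.compile(r"\.\./")

def filter_request(path: str) -> int:
    """Return an HTTP status: 403 if the virtual patch fires, else 200."""
    if VIRTUAL_PATCH.search(path):
        return 403   # blocked at the WAF; the app never sees the request
    return 200       # passed through to the (still unpatched) app
```

This is why the approach works for legacy systems: the underlying code is never touched, only the traffic reaching it.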
That said, maintaining strong patching hygiene remains essential. The key shift I’m observing is that senior management is becoming far more aware of the strategic importance of patching. The old mindset of ‘if it works, don’t touch it’ no longer applies. That change in attitude is one of the most positive developments in the industry.