AI in M&A: Why Security Is the Real Differentiator
Intralinks’ benchmark research reveals why security, governance and data control are now the defining challenges of AI-enabled dealmaking.
Artificial intelligence (AI) has moved from experimentation to everyday use in mergers and acquisitions (M&A). It is now embedded across workflows, accelerating tasks and expanding deal teams’ capabilities. But as adoption accelerates, a more urgent issue has emerged: security.
That’s one of the findings from AI in M&A Dealmaking 2026: A Benchmark Study, produced by SS&C Intralinks in partnership with Reuters Events. Based on a global survey of 400 senior corporate, private equity (PE) and advisory M&A professionals, the research shows that while AI is delivering measurable value across the deal life cycle, it is also introducing new risks that many organizations are still struggling to manage. Nowhere is this more evident than in security and governance, where adoption is moving faster than control.
Security incidents are now the norm, not the exception
AI-related risk is already affecting the vast majority of deal teams. Four in five organizations (80 percent) report experiencing AI-related security incidents or near misses in the past 12 months. These range from access-control lapses to inaccurate outputs generated by hallucinations, both of which can have serious consequences in high-stakes transactions.
Access-control issues are the most common, cited by nearly half of respondents. In practice, this can mean AI tools being granted overly broad permissions or interacting with sensitive deal data in unintended ways. At the same time, 40 percent report incidents tied to hallucinated outputs, where AI-generated insights introduce inaccuracies into diligence or analysis. Advisory firms and investment banks report higher rates of both access-control lapses and hallucinated outputs.
Importantly, our research shows that these are not isolated edge cases, but systemic risks emerging alongside broader adoption. The firms using AI most aggressively are often the ones experiencing the highest levels of exposure — highlighting that as AI becomes more embedded in dealmaking, managing risk becomes just as critical as unlocking its value.
“We started by training our senior leaders to understand the art of the possible ... but also be aware of things like hallucinations, things like bias. And so our leadership team early on had a pretty good idea of what’s good, what’s not.”
-- Hari Gopalkrishnan, Chief Technology and Information Officer, Bank of America, speaking at Reuters Events: Momentum AI Finance, November 2025
Governance frameworks exist, but gaps remain
At first glance, the industry appears well-prepared from a governance standpoint. Nearly all organizations we surveyed (94 percent) report operating under at least one formal AI policy or compliance framework, ranging from ISO/IEC standards to the NIST AI Risk Management Framework and the EU AI Act. On paper, this suggests a high level of maturity and alignment with emerging best practices.
However, the prevalence of security incidents tells a different story. The challenge is not a lack of governance, but how effectively these policies are implemented in day-to-day deal activity. In other words, while policies are widely established, they are not always consistently followed in practice — particularly as AI adoption accelerates and becomes more deeply embedded across workflows.
The findings point to a key reality: governance cannot remain static. It needs to be consistently applied across the tools and processes that support the deal life cycle. Leading organizations are already moving in this direction. As Hari Gopalkrishnan, chief technology and information officer at Bank of America, says, firms are increasingly taking structured approaches to risk, evaluating AI across multiple dimensions to ensure responsible use.
Data security is becoming non-negotiable
If there is one area of near-universal agreement among dealmakers, it is the importance of data security. Ninety-seven percent of respondents say strong data security is critical when evaluating AI solutions, with nearly two-thirds rating it as “very important.” This makes security the single most widely agreed-upon priority in the adoption of AI across M&A.
This emphasis reflects the nature of dealmaking itself. Transactions involve highly confidential financial, operational and strategic data, often shared across parties under strict controls. Introducing AI into this environment raises important questions about where that data resides, how it is accessed and how it is protected.
One of the key risks highlighted in the research is the movement of data between systems. When information is extracted from secure environments such as virtual data rooms (VDRs) and processed in external AI tools, it creates additional exposure points. As a result, security is increasingly being viewed not just as a feature, but as a foundational requirement. AI capabilities need to be built directly into secure environments, rather than layered on top.
“I think the concept of guardrails and responsible AI is a big part of what we do. We have a process by which we look at 16 different dimensions of risk.”
-- Hari Gopalkrishnan, Chief Technology and Information Officer, Bank of America, speaking at Reuters Events: Momentum AI Finance, November 2025
From adoption to control
Despite these challenges, the response from dealmakers is not to slow AI adoption. Instead, they are strengthening how AI is governed and deployed. Sixty percent of organizations that experienced incidents report continuing to adopt AI as planned, while implementing additional safeguards and governance measures. At the same time, nearly half say they have become more cautious in how they deploy the technology.
This shift reflects a broader change in how AI is being operationalized. Many deal teams rely on multiple, disconnected tools, often requiring data to be extracted from secure environments and moved into external AI platforms for analysis — introducing risk at each step. Increasingly, the focus is shifting toward bringing AI to the data, rather than moving data to AI. A key theme that emerged from the research is that security cannot be treated as an add-on. It must be embedded directly into the architecture, data handling and access controls of every tool that interacts with confidential deal information.
Platforms like SS&C Intralinks DealCentre AI™ — designed specifically for dealmaking — combine secure data environments with embedded AI capabilities, allowing teams to analyze, share and collaborate on sensitive information without needing to duplicate or transfer it across systems. By keeping data within a controlled environment, deal teams can reduce exposure risk while maintaining full visibility into how information is accessed and used.
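The controlled-environment pattern described above, where documents stay in place and every AI access is scoped and audited, can be sketched in a few lines of Python. This is a purely illustrative toy: all class, field and tool names here are hypothetical and do not reflect Intralinks' actual API or architecture.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class DealDocument:
    doc_id: str
    deal_id: str
    content: str


class SecureDealStore:
    """Illustrative store: documents never leave the environment; AI tools
    read through a gateway that enforces per-deal scope and keeps an audit
    trail of every access attempt, granted or denied."""

    def __init__(self) -> None:
        self._docs: dict[str, DealDocument] = {}
        self.audit_log: list[dict] = []

    def add(self, doc: DealDocument) -> None:
        self._docs[doc.doc_id] = doc

    def read(self, doc_id: str, tool_name: str, allowed_deals: set[str]) -> str:
        doc = self._docs[doc_id]
        permitted = doc.deal_id in allowed_deals
        # Log the attempt before deciding, so denials are visible too.
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "tool": tool_name,
            "doc": doc_id,
            "granted": permitted,
        })
        if not permitted:
            raise PermissionError(f"{tool_name} has no access to deal {doc.deal_id}")
        return doc.content


store = SecureDealStore()
store.add(DealDocument("d1", "project-apollo", "Target revenue: ..."))

# A tool scoped to the deal can read; an out-of-scope tool is denied.
# Both attempts land in the audit log.
text = store.read("d1", tool_name="summarizer", allowed_deals={"project-apollo"})
denied = False
try:
    store.read("d1", tool_name="external-llm", allowed_deals={"project-zeus"})
except PermissionError:
    denied = True
```

The key design choice is that the content is only ever returned through the gateway method, so scope checks and audit logging cannot be bypassed by the calling tool.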
The path forward
AI is now a core part of modern dealmaking. Its ability to accelerate workflows and uncover insights is already reshaping how transactions are executed. But as our research makes clear, security and governance are becoming the defining factors of successful AI adoption in M&A.
This reflects a broader shift toward more connected, controlled AI ecosystems. Technologies such as Model Context Protocol (MCP) are enabling AI tools to securely connect to trusted data sources without requiring data to be copied or moved between systems. For example, solutions such as Intralinks’ DealCentre Secure AI Gateway enable AI partner ecosystems to securely interact with data, workflows and services through a shared protocol and APIs.
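The interaction shape MCP enables, where an AI client asks a named tool a question and only the computed answer crosses the trust boundary, can be illustrated with a toy JSON-RPC-style handler. This sketch uses only the standard library and is not the MCP SDK or any Intralinks product; the tool name, data and request fields are invented for illustration.

```python
import json

# Deal data that stays in place behind the gateway; it is queried,
# never exported wholesale to the AI client.
DEAL_FACTS = {"project-apollo": {"revenue_musd": 120, "ebitda_musd": 30}}


def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC-style tool call: the client names a tool and
    arguments, the gateway runs it against local data, and only the
    resulting value is returned across the boundary."""
    req = json.loads(raw)
    if req.get("method") == "tools/call" and req["params"]["name"] == "get_metric":
        args = req["params"]["arguments"]
        value = DEAL_FACTS[args["deal"]][args["metric"]]
        result = {"content": value}
    else:
        result = {"error": "unknown method or tool"}
    return json.dumps({"jsonrpc": "2.0", "id": req.get("id"), "result": result})


# An AI client requests one metric for one deal.
request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "get_metric",
               "arguments": {"deal": "project-apollo", "metric": "ebitda_musd"}},
})
response = json.loads(handle_request(request))
```

The point of the pattern is data minimization: the client learns the EBITDA figure it asked for, while the underlying documents and the rest of the dataset never leave the controlled environment.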
Ultimately, success will depend not simply on how fast organizations adopt AI, but how effectively they align innovation with control. For many firms, that means partnering with proven experts rather than trying to build and manage these capabilities internally. These providers bring a track record of delivering purpose-built AI for dealmaking, along with the bank-grade security required to protect sensitive transaction data. The firms that lead will be those that bring together advanced AI capabilities, secure infrastructure and trusted partners, turning AI from a source of risk into a source of competitive advantage.
For the full findings, read AI in M&A Dealmaking 2026: A Benchmark Study.