Eighteen months ago, board questions about AI in R&D were largely aspirational: what are you exploring, what’s the potential, what are peers doing? The questions were oriented toward future possibility and carried limited accountability for current performance.
That posture has shifted. Boards that approved AI investment in innovation functions in 2023 and 2024 are now asking accountability questions: what has the investment produced, how do you know the AI is working correctly, what governance is in place, and what are the risks you’re managing? Innovation leaders who prepared for the aspirational version of the board conversation are finding themselves in a different one.
This post covers the six questions boards are now asking about AI in R&D, what good answers look like, and what the questions signal about the level of AI governance documentation your organization should have in place before the next board meeting.
Question 1: What Is the AI Actually Doing in Our Innovation Process?
This is the foundational question, and it’s asked more frequently than innovation leaders expect because board members often hold very different mental models of what “AI in R&D” means. Some board members imagine AI designing products autonomously. Others imagine a slightly smarter search function. The gap between what the AI is actually doing and what board members assume it’s doing is large enough that innovation leaders who don’t address it explicitly will face follow-on questions premised on incorrect assumptions.
A good answer is specific and functional: AI is performing competitive landscape analysis and patent monitoring continuously across our active portfolio, generating structured gate review packages from project data, scoring submissions against strategic criteria at Gate 0 and Gate 1, and flagging regulatory developments relevant to active projects. Each of these functions has a named capability, a defined scope, and a clear relationship to a human decision that the AI informs but does not make.
The answer that creates board confidence is specific enough to be auditable. “AI is helping us be more efficient” is not an answer that satisfies a board that has approved material AI investment. The specificity of the answer signals that the innovation leader understands what they’ve deployed, not just that they’ve deployed something.
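One way to make that answer auditable is to keep a standing capability inventory: one entry per AI function, with its scope and the human decision it informs. A minimal sketch in Python; the entries mirror the functions described above, and every name and field here is illustrative rather than drawn from any particular platform:

```python
from dataclasses import dataclass

@dataclass
class AICapability:
    """One AI function in the innovation process: what it does,
    where it runs, and which human decision its output informs."""
    name: str
    scope: str           # where in the portfolio or process it applies
    informs: str         # the decision the AI output feeds into
    decision_maker: str  # the humans who remain accountable

# Hypothetical inventory mirroring the functions described above.
CAPABILITY_INVENTORY = [
    AICapability(
        name="Competitive landscape and patent monitoring",
        scope="Continuous, across the active portfolio",
        informs="Gate reviews and portfolio strategy",
        decision_maker="Gate committee",
    ),
    AICapability(
        name="Gate review package generation",
        scope="Per project, at each gate",
        informs="Gate go/kill/hold/recycle decision",
        decision_maker="Gate committee",
    ),
    AICapability(
        name="Submission scoring against strategic criteria",
        scope="Gate 0 and Gate 1 submissions",
        informs="Early-stage screening decision",
        decision_maker="Screening committee",
    ),
    AICapability(
        name="Regulatory development flagging",
        scope="Active projects with regulatory exposure",
        informs="Project risk assessment",
        decision_maker="Project lead and gate committee",
    ),
]
```

An inventory in this shape answers the board question directly and doubles as the scope definition the rest of the governance documentation can reference.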
Question 2: Who Is Accountable When the AI Is Wrong?
This question reflects a legitimate governance concern that boards are applying with increasing rigor as AI systems move from experimental tools to operational infrastructure. If an AI-generated risk assessment influences a gate decision that turns out to be wrong, who is accountable for that decision?
The answer that works—for boards and for the organization—is that the gate committee is accountable for the decision, not the AI. AI-generated analysis is a decision input, not a decision. Gate committees that use AI analysis without reviewing it, challenging it, or applying their own judgment to it are not operating the governance framework correctly. The AI governance documentation should be explicit about what AI outputs require human review before use in formal decision contexts, what that review consists of, and how it is documented.
The follow-on board question is often: how do you ensure that human review is actually happening rather than becoming a rubber stamp? The answer requires reference to specific governance practices: how AI-generated outputs are labeled, how reviewer confirmation is documented in the gate record, and how discrepancies between AI analysis and committee judgment are recorded, together with the committee’s rationale for diverging. Boards that ask this question are probing whether governance is real or performative, and the answer needs to demonstrate the former.
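What documented human review can look like in practice is easier to judge from a concrete record shape than from a policy statement. A hedged sketch, with hypothetical field names, of a gate record entry that would let an auditor confirm the review happened and see where the committee diverged from the AI:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIReviewedInput:
    """An AI-generated analysis attached to a gate decision,
    plus the human review trail the governance framework requires."""
    label: str                      # e.g. "AI-generated risk assessment"
    generated_on: date
    reviewed_by: str                # a named reviewer, not a role placeholder
    reviewed_on: date
    review_notes: str               # what the reviewer checked or challenged
    committee_diverged: bool = False
    divergence_rationale: str = ""  # required whenever the committee overrode the AI

def audit_review_trail(entry: AIReviewedInput) -> list[str]:
    """Flag entries that would fail an audit of the human review requirement."""
    problems = []
    if not entry.reviewed_by:
        problems.append("AI output used without a named reviewer")
    if entry.reviewed_on < entry.generated_on:
        problems.append("Review is dated before the AI output it covers")
    if entry.committee_diverged and not entry.divergence_rationale:
        problems.append("Committee overrode AI analysis without a recorded rationale")
    return problems
```

The point of the structure is that “review happened” becomes a checkable property of the gate record rather than an assertion, which is exactly the distinction between real and performative governance the board is probing for.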
Question 3: What Data Is the AI Using, and Who Controls It?
Data governance is where board concern about AI overlaps most directly with existing concerns about IP protection and competitive risk. The question “what data is the AI using” is really asking two things: is our sensitive innovation data being exposed to external AI systems, and are we confident in the quality and appropriateness of the data the AI is drawing on to generate its analysis?
The architecture answer that satisfies board data governance concerns is that innovation data remains within the organization’s own Microsoft 365 environment—it is not sent to external AI services for processing, it is not stored in vendor-managed infrastructure outside the organization’s control, and it is governed by the same access controls, audit logging, and compliance framework that governs all organizational data. The AI governance framework for the innovation function should be documentable in the same terms as the organization’s broader data governance framework, because they are the same framework.
The data quality answer requires honest engagement with what the AI’s outputs are based on and where the data limitations affect reliability. A board that receives a confident answer about AI capabilities without acknowledgment of data quality conditions will eventually receive an outcome that contradicts those capabilities and will draw the conclusion that leadership was either uninformed or misleading. Acknowledging conditions—“our pattern-based risk analysis is most reliable for projects in domains where we have two or more years of historical data, and we flag that caveat to gate committees”—builds more durable board confidence than unqualified capability claims.
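That kind of caveat is most reliable when it is generated rather than remembered. A minimal sketch, assuming a hypothetical measure of how many years of historical data stand behind a domain; the two-year threshold comes from the example above, not from any specific product:

```python
HISTORY_THRESHOLD_YEARS = 2.0  # reliability threshold from the example above

def reliability_caveat(domain: str, years_of_history: float) -> str | None:
    """Return the caveat a gate committee should see alongside
    pattern-based risk analysis, or None when coverage is sufficient."""
    if years_of_history >= HISTORY_THRESHOLD_YEARS:
        return None
    return (
        f"Caution: pattern-based risk analysis for '{domain}' draws on only "
        f"{years_of_history:.1f} years of historical data, below the "
        f"{HISTORY_THRESHOLD_YEARS:.0f}-year threshold where it is most reliable."
    )
```

Attaching the caveat automatically means the qualification reaches the gate committee every time the condition holds, not only when someone remembers the limitation.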
Question 4: How Is This Affecting Our R&D Headcount and Capabilities?
This question has two versions that require different answers. The first version is aspirational: is AI creating efficiency that allows us to do more with the same resources? The second version is operational: are we reducing headcount in R&D because of AI, and if so, how are we managing that?
The honest answer for most specialty chemicals and materials organizations deploying AI in innovation management is that AI is expanding capacity rather than reducing headcount—the same team can manage more projects at higher analytical quality because AI handles work that previously constrained their bandwidth. This is a more defensible board answer than headcount reduction because it frames AI as a competitive capability rather than a cost reduction measure, which is the more accurate characterization of what AI-assisted innovation management actually produces in the near term.
The answer should also address the R&D team’s concerns about AI and job security. Boards increasingly ask whether AI deployment is creating internal resistance that will undermine adoption, and the answer should reflect that leadership has communicated clearly about AI’s role as an analytical assistant rather than a headcount replacement—and that platform design reinforces this positioning by keeping human judgment at the center of every gate decision.
Question 5: What Outcomes Has the AI Investment Produced?
This is the accountability question that separates organizations with mature AI deployment from those still in the early stages. The answer requires specific outcome metrics rather than capability descriptions: gate review preparation time has decreased from X days to Y hours, early-stage termination rates have improved by Z percent, competitive intelligence is now current at every gate review rather than three to six months stale.
Innovation leaders who cannot answer this question with data should be preparing to answer it for the next board meeting, which means establishing the measurement baseline now if it isn’t already in place. The metrics that matter to boards are business outcomes, not technology metrics: R&D productivity, portfolio yield, decision quality, and resource efficiency. Technology metrics—number of AI queries, user adoption rates, features deployed—are not board-level answers to board-level investment accountability questions.
If the investment is recent enough that outcome metrics are not yet available, the answer should commit to specific metrics that will be available by a defined date and explain why the measurement timeline reflects the nature of the outcomes being measured—portfolio yield metrics require project lifecycle data that accumulates over twelve to twenty-four months—rather than creating the impression that measurement is being deferred indefinitely.
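Establishing that baseline is largely a matter of capturing a few fields per gate event and computing the same board-level metrics the same way each quarter. A sketch of the shape, with hypothetical field names and the two metrics named above:

```python
from dataclasses import dataclass

@dataclass
class GateEvent:
    project_id: str
    gate: int
    prep_hours: float  # elapsed time to assemble the review package
    decision: str      # "go", "kill", "hold", or "recycle"

def avg_prep_hours(events: list[GateEvent]) -> float:
    """Board-level metric: average gate review preparation time."""
    return sum(e.prep_hours for e in events) / len(events)

def early_termination_rate(events: list[GateEvent], through_gate: int = 1) -> float:
    """Share of early-gate decisions that were kills: a proxy for whether
    weak projects are being stopped before they consume resources."""
    early = [e for e in events if e.gate <= through_gate]
    return sum(e.decision == "kill" for e in early) / len(early) if early else 0.0
```

The metrics themselves are simple; what makes them board-ready is that the baseline exists before the outcomes mature, so the before-and-after comparison is measured rather than reconstructed.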
Question 6: What AI Risks Are You Managing?
AI risk questions from boards are becoming more specific as AI governance frameworks mature at the regulatory and organizational level. The generic answer—“we have AI governance in place”—no longer satisfies boards that have been briefed on AI risk frameworks by their legal counsel or audit committee.
The specific risks that boards of specialty chemicals companies most commonly ask about are: IP exposure risk (is our innovation data being used to train external AI models), reliability risk (how do we know when AI analysis is wrong before we act on it), regulatory risk (what is our exposure if AI-assisted decisions are reviewed by regulators), and dependency risk (what happens to our innovation process if the AI system becomes unavailable).
Each of these has a documentable answer that reflects specific governance design choices. IP exposure risk is addressed by native M365 architecture that keeps innovation data within the organization’s tenant. Reliability risk is addressed by human review requirements and audit documentation of AI outputs and human decisions. Regulatory risk is addressed by the same audit trail that governs all innovation decisions, extended to cover AI-assisted analysis explicitly. Dependency risk is addressed by ensuring that the underlying innovation data remains in the organization’s own environment regardless of the AI application layer.
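Those four answers keep better, and travel better to an audit committee, as a small risk register than as prose. A hedged sketch; the risk names come from the list above and the mitigation wording is illustrative:

```python
# Hypothetical register mapping each board-level AI risk to the governance
# design choice that addresses it and the evidence that can be produced.
AI_RISK_REGISTER = {
    "IP exposure": {
        "mitigation": "Innovation data stays within the organization's M365 "
                      "tenant; no external model training on tenant data",
        "evidence": "Architecture documentation and data flow diagrams",
    },
    "Reliability": {
        "mitigation": "Human review required before AI outputs enter formal "
                      "gate decisions",
        "evidence": "Gate records with reviewer sign-off and divergence notes",
    },
    "Regulatory": {
        "mitigation": "AI-assisted analysis covered by the same audit trail "
                      "as all innovation decisions",
        "evidence": "Audit log extracts for AI-assisted gate decisions",
    },
    "Dependency": {
        "mitigation": "Underlying innovation data remains in the organization's "
                      "own environment, independent of the AI application layer",
        "evidence": "Data residency documentation and export capability",
    },
}
```

A register in this form also makes the natural follow-on question, “show me the evidence,” answerable on the spot.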
Innovation leaders who can answer all six questions with specificity, supported by documented governance practices and current outcome metrics, are in the position boards expect of leaders who have deployed material AI investment. Those still preparing their answers now have a clear agenda: the governance documentation and measurement infrastructure that need to be in place before the next board conversation about AI in R&D.

