Artificial Intelligence in the IC: Culture is Critical

(Editor’s Note: This article was first published by our friends at Just Security and is the second in a series diving into the foundational barriers to the broad integration of AI in the IC – culture, budget, acquisition, risk, and oversight.)

OPINION — Several weeks ago, I wrote an article praising the widespread, bipartisan support for the U.S. Innovation and Competition Act (USICA), which would dramatically expand federal government support for U.S. technological growth and innovation in the face of the global AI race.

In that article, I argued that for the Intelligence Community (IC) to take advantage of AI in this supportive environment, it must overcome several critical implementation challenges, and quickly. In particular, the IC must navigate U.S. government budget and acquisition processes more rapidly and nimbly, create a simple but effective risk assessment framework, and work with congressional overseers to streamline engagement and improve the partnership between Congress and the IC. Each of these areas is in dire need of radical re-imagining; without it, any one of them could become the Achilles’ heel of AI in the IC. I will address each in my next few articles.

To successfully tackle any of these specific tasks, though, the IC must at the same time prioritize an issue much more intangible and nebulous – its own culture. Culture is the ethos of an organization – the beliefs, behaviors, values, and characteristics of a group that are learned and shared over many years. In the IC, there are several predominant cultures, all of which flow from the mission of the IC – protecting the women and men on the front lines, defending U.S. national security, and delivering insights to policymakers at the speed of decision. This mission is a powerful and unifying force that naturally leads to important IC values and behaviors.

IC Culture Today

Intelligence operations – uncovering foreign secrets and protecting assets, for example – are inherently risky; they very often put people in harm’s way. If there is a leak of information related to an operation – if the people involved, or the location or target of an operation, are exposed – not only might the mission fail to collect the desired information, but someone’s life could also be in jeopardy. The extreme consequences of leaks are well understood, thanks to notorious spies like Robert Hanssen and insider leakers like Edward Snowden. But significant damage can also flow from what seem like merely small mistakes. If someone fails to connect pieces of relevant information or forgets to check a database of known terrorists, for example, the results can be just as disastrous. Thus, the IC’s high-stakes operations drive an enormous emphasis on security, preparation, and tradecraft, all of which help mitigate operational risk.

This same spirit manifests in “enabling” activities – budget, acquisition, and human resources, for example – through a focus on certainty of action and predictability of results. Enabling activities – a term some consider negative but one in which I take pride as a life-long enabler – are somewhat removed from the “pointy end of the spear” but are no less critical to the ultimate success of the mission. Proper funding and resources, the right capabilities, skilled officers, legal approval, and many other support activities are integral to successful operations.

In the field, risks are unavoidable – operators cannot choose inaction to avoid them. Because risk is inherent in what they do, operators must accept it and learn to manage it to reap the huge payoff of successful operations. So the focus is not on risk avoidance but on risk management: what level of risk is acceptable for what level of intelligence gain?

Back home, where most enabling activities are handled, risks are not seen as inevitable – certainly not big ones. They are seen as avoidable, to be minimized and mitigated. And some believe the best way to do that is to stay with tried-and-true standard operating procedures rather than experiment with new approaches. Innovation is inherently risky; it can and will fail. And unlike field operations, innovation is not mandatory – it is entirely avoidable. So where the tendency is to avoid risk, innovation will, in most cases, be avoided too.

In addition to this instinct, compounding issues discourage innovative change in enabling activities. First, there are practical difficulties: change is hard and messy, and it requires resources that most offices can’t spare. These concerns alone are big hurdles to clear. Second, innovative change means uncertainty – in execution, accountability, and success. That uncertainty carries the risk that projects may fail, costing money, reputation, or even one’s position. Thus, control, compliance, and trust are paramount, and there is a strong aversion to anything “not invented here.” Innovation is not particularly welcome in this environment, and introducing new ideas can be an uphill battle, discouraging creativity in the areas where it is needed most.