Artificial intelligence (AI) and machine learning (ML) could provide the homeland with an unprecedented opportunity to enhance its cybersecurity posture. The Science and Technology Directorate (S&T) is exploring how new advances in these technologies can be used to quickly process large amounts of data and deploy models that detect threats, increase resilience and provide more supply chain oversight.
When you hear about AI in the news, it sounds as if the robots of science fiction will be taking over soon. What isn’t as commonly covered is how much potential AI has to make things safer from a cybersecurity perspective. The sheer volume of data it can process in a compressed period of time, and its ability to contextualize that data using ML, has vast implications for those attempting to make the nation more cyber secure.
The DHS Science and Technology Directorate (S&T) is exploring the many ways this technology and its newer applications can support the national security mission, in line with the DHS AI Roadmap and its “Protect AI systems from cybersecurity threats and guard against AI-enabled cyberattacks” workstream. For the most complex cybersecurity problems, AI can provide solutions, and potential protections, never before imagined, according to Donald Coulter, S&T’s Senior Science Advisor on Cybersecurity.
S&T is working on a number of initiatives that are intended to help inform the Cybersecurity and Infrastructure Security Agency’s (CISA) AI strategy. For instance, S&T has a project underway to research advanced methods for enabling real-time management of cyber threats to critical infrastructure. Another project is increasing the resilience of software analysis tools by helping to identify and mitigate possible weaknesses in ML-based reverse engineering tools, as part of an overarching strategy to assess and mitigate risks of adversarial attacks on AI-based systems. This effort involves identifying whether certain ML algorithms may be susceptible to subversion by sophisticated adversaries, which could make it difficult to understand and mitigate attacks on S&T models. There is also work being done to help CISA launch a testbed that can provide a secure, connected multi-cloud environment to support AI development and testing. Since AI systems are software systems, ensuring that they are designed and deployed securely is an extension of CISA’s cybersecurity mission.
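To make the subversion risk concrete, here is a rough, hypothetical sketch (not an S&T tool or result) of how a small, deliberately crafted perturbation can flip the decision of a toy linear classifier; the features, weights and perturbation budget are all invented for illustration.

```python
# Illustrative sketch only: a fast-gradient-style evasion against a toy
# linear "malicious vs. benign" classifier. Weights, bias, and the sample
# are hypothetical; real ML-based analysis tools are far more complex.
import numpy as np

# Pre-trained toy model: a positive decision score means "malicious."
w = np.array([1.2, -0.8, 0.5, 2.0])
b = -0.3

def predict(x):
    """Return 1 (malicious) if the linear decision function is positive."""
    return int(x @ w + b > 0)

x = np.array([0.9, 0.1, 0.4, 0.7])            # sample scored as malicious
print("original prediction:", predict(x))      # -> 1

# Evasion: nudge each feature opposite the sign of its weight, within a
# small budget epsilon, to push the score below the decision threshold.
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)
print("perturbed prediction:", predict(x_adv))  # -> 0, decision flipped
```

The same principle, that an adversary able to probe a model can often find minimal input changes that subvert it, is the kind of weakness the project seeks to identify and mitigate in far more capable ML-based tools.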
“CISA understands that AI is the future and intends to move further in that direction,” said S&T Program Manager Benson Macon. “We can provide them with the contract vehicle to support very specific R&D activities. We get them the experts. Contract support is critical at this time, especially when it comes to AI. There is a shortage within this field of expertise. We must bring in the experts that are more specialized and have the knowledge depth, coupled with the technology experience gained in the IT industry.”
Future-Looking AI Exploration
A large part of S&T’s current work on AI cybersecurity applications involves conducting the research needed to help chart a future course, looking ahead to how AI will evolve and may be applied. S&T funded a series of Emerging Technology and Risk Analysis reports, and one, co-authored by former S&T Acting Under Secretary Dan Gerstein, looked specifically at risks and scenarios relating to AI use affecting critical infrastructure. While assessing that both challenges and opportunities exist, researchers pointed to the arrival of commercially available generative AI in March 2023 as an instructive case study for how AI technologies, in this case large language models, are likely to mature and be integrated into society. This disruptive generative AI “bot” was capable of analyzing large quantities of data and generating content, performing a human-like function never before seen. According to the report, the initial rollout illustrated a cycle of development, deployment, identification of shortcomings and other areas of potential use, and rapid updating of AI systems that will likely be a feature of future AI evolutions.
S&T also partnered with the National Science Foundation (NSF) on the launch of the AI Institute for Agent-Based Cyber Threat Intelligence and Operation (ACTION). Though the institute is only a year old, S&T hopes that the R&D it produces will ultimately inform S&T’s AI Roadmap and help push its development programs forward into the future. “We will take the knowledge learned and preliminary technologies developed to inform our approach to operationalizing AI for cybersecurity,” Coulter said.
The ACTION Institute, a federally funded university consortium, seeks to change the way mission-critical systems are protected against sophisticated, ever-changing security threats. The ultimate goal is to design AI-based intelligent agents that security operations experts can use, agents that apply complex knowledge representation, logical reasoning and learning to identify flaws, detect attacks, perform attribution and respond to breaches in a timely and scalable fashion.
“We are using AI to increase the effectiveness of our cybersecurity mechanisms and researching ways we can use distributed learning and automated intelligent agents to monitor the network for anomalies. Can we apply this? How do we make our detection and mitigation techniques automated and identify indicators of compromise?” Coulter said.
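As a minimal, hypothetical illustration of what automated anomaly detection over network telemetry can look like (an assumption for this article, not the ACTION Institute’s methods), the sketch below uses scikit-learn’s IsolationForest to flag unusual flow records as candidates for analyst review.

```python
# Illustrative sketch: unsupervised anomaly detection over synthetic network
# flow features. The feature set and data are invented for demonstration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" flows: [bytes_sent_kb, duration_s, distinct_ports]
normal = rng.normal(loc=[50, 2.0, 3], scale=[10, 0.5, 1], size=(500, 3))
# Two synthetic outliers resembling exfiltration and port scanning.
suspicious = np.array([[900.0, 30.0, 2.0], [40.0, 1.5, 120.0]])
flows = np.vstack([normal, suspicious])

# Fit an unsupervised detector; predict() returns -1 for anomalous points.
detector = IsolationForest(contamination=0.01, random_state=0).fit(flows)
labels = detector.predict(flows)

# Flagged indices become candidate indicators of compromise for human review.
print("flagged flow indices:", np.where(labels == -1)[0])
```

In practice, flagged records would feed automated detection and mitigation workflows rather than a print statement, with analysts reviewing what the model surfaces.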
Human Teaming Is the Key
The robots won’t be taking over at S&T anytime soon. The directorate is committed to applying a human-machine teaming approach, especially when it comes to cybersecurity, with the understanding that keeping humans in the loop offers unique advantages, lending greater control and oversight of quality. The Center for Accelerating Operational Efficiency (CAOE), one of the DHS Centers of Excellence (COE), has a project in progress on Combining Human Intelligence with Artificial Intelligence for a Usable, Adaptable [Software Bill of Materials], more commonly known as CHIAUS. Also focused on software resilience, this project aims to integrate human-centered interactions with Software Bill of Materials (SBOM) data to empower developers and consumers with actionable, understandable risk information and foster greater trust in automated decision-making systems. This increased confidence will come from having more detailed information on each software component and its chain of custody, as well as the human factors that could ultimately influence results.
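As a hypothetical sketch of how SBOM component and chain-of-custody details might be surfaced for a human reviewer (not the CHIAUS tooling itself), the example below reads a small CycloneDX-style SBOM with Python’s standard json module; the SBOM contents are invented, and field names can vary by SBOM generator.

```python
# Illustrative sketch: list SBOM components and suppliers for human review.
# The embedded SBOM is a made-up CycloneDX-style example.
import json

sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "openssl", "version": "1.1.1k", "supplier": {"name": "Example Org"}},
    {"name": "left-pad", "version": "1.3.0"}
  ]
}
"""

sbom = json.loads(sbom_json)
for component in sbom.get("components", []):
    supplier = component.get("supplier", {}).get("name", "unknown supplier")
    # Surface the provenance details a reviewer would weigh alongside any
    # automated risk score before trusting a decision.
    print(f'{component["name"]} {component.get("version", "?")} ({supplier})')
```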
Human-machine teaming will ultimately increase the effectiveness of cybersecurity, Coulter said, and that belief undergirds S&T’s approach. While the value of a human in the loop is generally recognized as a risk-mitigating feature, S&T is mapping out future research to dig deeper into ways to extract maximum value from AI with minimal risk of human error. This future research will seek to identify the most effective human-machine teaming models for homeland security applications, determine ways to increase the effectiveness of teams individually and at scale, and increase trust in both the AI model and the human’s competence as they apply to cybersecurity use cases.
ACTION will look at different components of AI and think through how to build them and how to shape the human interaction with an intelligent agent that is specifically focused on cybersecurity. “How do they interact with each other? How do we pull in thoughts from game theory, social behavioral analysis?” Coulter asked. “The outcome will be that we as an organization will be able to use this tech autonomously to respond to an incident and mitigate it, leading to improved resilience. AI will be used as a tool to create more secure components as part of the design and analyze systems while in operation to identify where potential weak points might be.”