AI for Homeland Security
Are your cybersecurity policies people-proof or people-first?
The Department of Homeland Security (DHS) is leveraging artificial intelligence (AI) to enhance cybersecurity while prioritizing ethical innovation. One key initiative is achieving compliance by December 2024 with Office of Management and Budget Memorandum M-24-10, which mandates identifying AI use cases that impact rights and safety; DHS identified 40 such cases. Five of these cases received compliance extensions, reflecting the need to balance progress with accountability.
DHS’s approach prompts a critical question: How can AI-driven security coexist with the protection of individual rights? The agency faces a dual challenge — ensuring AI is effective and equitable. DHS must prioritize foundational civil liberties as more AI advancements unfold.
Strategic Developments in AI Governance
The Cybersecurity and Infrastructure Security Agency (CISA), a division of DHS, recently published its first AI roadmap, reinforcing secure development as a cornerstone of national safety. This aligns with Executive Order 14110, under which DHS conducted a pilot to identify vulnerabilities in government software, systems, and networks. The successful pilot results were presented to the White House in July 2024.
However, the success of such initiatives also raises critical questions about scalability and adaptability. Agencies must avoid overreliance on AI for vulnerability identification while ensuring transparency.
The Government Accountability Office (GAO) has recommended improved classification of AI use cases, highlighting the need for thorough verification to uphold accountability and trust. This recommendation points to a broader issue in AI governance: ensuring transparency in the deployment and monitoring of AI systems.
A People-First Cybersecurity Perspective
DHS Secretary Alejandro Mayorkas has consistently emphasized that cybersecurity success hinges on the people behind the technology. This perspective recognizes that technology is only as effective as the people who design, manage, and utilize it. A people-first approach challenges organizations to rethink traditional security models, focusing on empowering employees through education, engagement, and alignment with organizational goals.
At the Gartner Security & Risk Management Summit 2024, experts underscored the value of engaging employees with a shared purpose. When workers understand both the "how" and the "why," they are more likely to champion cybersecurity initiatives. This approach aligns with Simon Sinek's principle: “People don’t buy what you do; they buy why you do it.”
Programs like the Alert Readiness Framework (ARF) exemplify this shift, focusing on employee empowerment rather than mere compliance. But how can organizations bridge the gap between routine processes and transformative strategies? A people-first mindset may be the key.
The DHS Roadmap for AI and Addressing Threat Landscapes
DHS’s AI roadmap, published in 2024, outlines plans to test AI technologies that aim to advance public benefits and bolster homeland security while protecting individual privacy, civil rights, and civil liberties.
This roadmap includes five critical priorities:
- Using AI responsibly to support CISA’s mission
- Ensuring AI systems are secure and resilient
- Preventing malicious AI applications against critical infrastructure
- Strengthening collaboration with interagency and international partners
- Expanding AI expertise within the agency
While these goals are commendable, operationalizing them poses challenges. For instance, what mechanisms will DHS employ to ensure the resilience of AI systems? How will interagency collaboration address the unique challenges posed by cross-jurisdictional threats? Answering these questions will require robust resilience mechanisms and durable partnerships.
This roadmap, along with the White House’s directive for national security agencies to adopt AI technologies responsibly, as outlined in its first National Security Memorandum on AI, shows a commitment to not only leverage AI for national security but also to set a global benchmark for ethical AI governance. This dual focus on innovation and integrity challenges agencies to remain agile in an environment where technological and geopolitical landscapes are rapidly evolving.
Measuring Cybersecurity Awareness and Effectiveness
Effective awareness programs, guided by frameworks like NIST SP 800-55, are vital for translating knowledge into defense strategies. Leadership plays a pivotal role in embedding cybersecurity into organizational culture, ensuring awareness leads to meaningful action.
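To make the measurement idea concrete, here is a minimal sketch of how effectiveness measures in the spirit of NIST SP 800-55 might be tracked in code. The metric names, figures, and targets below are hypothetical illustrations, not values from the standard or from DHS.

```python
from dataclasses import dataclass

@dataclass
class AwarenessMetric:
    """A simple ratio-based effectiveness measure (NIST SP 800-55 style)."""
    name: str
    numerator: int     # e.g., employees who completed training
    denominator: int   # e.g., total employees in scope
    target: float      # hypothetical organizational target, 0.0 to 1.0

    @property
    def value(self) -> float:
        # Guard against division by zero for empty populations
        return self.numerator / self.denominator if self.denominator else 0.0

    def meets_target(self) -> bool:
        return self.value >= self.target

# Hypothetical quarterly figures, for illustration only
metrics = [
    AwarenessMetric("training_completion_rate", 930, 1000, target=0.95),
    AwarenessMetric("phishing_report_rate", 412, 1000, target=0.40),
]

for m in metrics:
    status = "on target" if m.meets_target() else "needs attention"
    print(f"{m.name}: {m.value:.1%} ({status})")
```

The point of such a structure is less the arithmetic than the discipline: each awareness activity gets an explicit, leadership-visible target, so the program's contribution to defense can be reviewed rather than assumed.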
The Security Awareness Maturity Model assesses program maturity, ensuring initiatives align with strategic goals. Its five stages are:
- Non-existent: The program does not exist
- Compliance-focused: The program is designed to meet compliance or audit requirements
- Promoting awareness and behavior change: The program identifies target groups and training topics to support the organization's mission
- Long-term sustainment and culture change: The program has the processes, resources, and leadership support to sustain itself year over year, embedding security into organizational culture
- Metrics framework: The program has a robust metrics framework to track progress and demonstrate impact, enabling continuous improvement
Programs that reach Stage Three of the model demonstrate effective human risk management by aligning awareness objectives with organizational priorities. But how can organizations accelerate their journey to higher maturity stages? The answer lies in fostering a culture of proactive security by integrating awareness programs with broader organizational goals. That culture starts with leadership: leaders must champion awareness initiatives not as a compliance requirement but as a strategic priority.
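The staged progression described above can be sketched as an ordered enumeration, which makes "how far to the next stage" a simple computation. This is an illustrative model only; the stage names follow the list above, and the helper function is a hypothetical planning aid, not part of any published framework.

```python
from enum import IntEnum

class MaturityStage(IntEnum):
    """Security Awareness Maturity Model stages, ordered low to high."""
    NON_EXISTENT = 1
    COMPLIANCE_FOCUSED = 2
    BEHAVIOR_CHANGE = 3
    CULTURE_CHANGE = 4
    METRICS_FRAMEWORK = 5

def stages_remaining(current: MaturityStage, goal: MaturityStage) -> list[MaturityStage]:
    """Return the stages a program must still pass through to reach its goal."""
    return [s for s in MaturityStage if current < s <= goal]

# A program at the compliance-focused stage aiming for a full metrics framework
remaining = stages_remaining(MaturityStage.COMPLIANCE_FOCUSED,
                             MaturityStage.METRICS_FRAMEWORK)
for stage in remaining:
    print(stage.name)
```

Modeling the stages as ordered values underlines a practical point: maturity is cumulative, so a program cannot jump to a metrics framework without first achieving behavior and culture change.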
Human Factors in Cybersecurity
Human behavior is the most critical component of technology success, but it also remains the weakest link in cybersecurity: 88% of data breaches are caused by human error, according to a study by Stanford University and Tessian. Security awareness and training programs educate employees on risk management, threat intelligence, and defensive strategies, ensuring they are both informed and actively engaged in maintaining organizational security.
Training programs must resonate with diverse teams while addressing challenges like stress and burnout. Incorporating behavioral insights and social learning into design fosters compliance and engagement. Addressing these issues is not just a matter of employee well-being but also a strategic necessity, as a disengaged workforce can undermine even the most robust security measures.
For example, NIST frameworks incorporate social learning techniques to enhance cybersecurity maturity. Additionally, research in Australia has focused on human-centric cybersecurity, emphasizing behavioral integration into system design for improved compliance and user engagement.
Balancing Security and Privacy in AI Applications
AI technologies like facial recognition demand stringent safeguards to protect privacy and civil liberties. DHS has established comprehensive guidelines to ensure the responsible use of AI technologies in alignment with Policy Statement 139-06, which mandates compliance with privacy, civil rights, and civil liberties protections.
The Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure, developed collaboratively with industry and civil society, provides voluntary guidance for secure AI implementation across 16 critical infrastructure sectors. However, enforceability and efficacy remain in question. Should these guidelines become mandatory?
A Holistic Approach to Cybersecurity
DHS’s commitment to ethical AI and cybersecurity innovation sets a high standard for balancing technology and human values. By focusing on transparency, people-first strategies, and collaborative governance, the agency is shaping a future where security and trust go hand in hand.
DHS Chief Information Security Officer Hemant Baidwan stresses that AI cannot replace the need for human analysis, which ensures accuracy and decision-making integrity in cybersecurity operations. This perspective underscores a broader truth: technology is a tool, not a panacea.
DHS faces a critical juncture: How can it maintain the balance between leveraging cutting-edge technologies and preserving the human elements that drive accountability and trust? Through these concerted efforts, DHS seeks to leverage AI for improved cybersecurity outcomes while safeguarding privacy, civil liberties, and public trust.