
The Department of Homeland Security (DHS), which is responsible for safeguarding the U.S. from threats including terrorism and cyberattacks, has brought in experts from across the AI sector to form an Artificial Intelligence Safety and Security Board.

The board's formation was directed by President Biden's AI executive order. The new advisory committee's primary objective is to provide recommendations on the safe development and deployment of AI technology in the nation's critical infrastructure and other areas of concern.

The DHS has added 22 individuals to the board so far, including technology and critical infrastructure executives, civil rights leaders, academics, and policymakers. The board includes the heads of major tech firms: Sam Altman, CEO of OpenAI; Sundar Pichai, CEO of Alphabet; Satya Nadella, chairman and CEO of Microsoft; and Dario Amodei, CEO and co-founder of Anthropic.

“Artificial Intelligence is a transformative technology that can advance our national interests in unprecedented ways. At the same time, it presents real risks — risks that we can mitigate by adopting best practices and taking other studied, concrete actions,” said Homeland Security Secretary Alejandro Mayorkas.

The DHS has also announced the appointment of Michael Boyce as director of the department's new "AI Corps." One of Boyce's primary responsibilities is to recruit 50 AI experts to DHS by the end of this year. These experts will work on high-priority AI projects.

Last September, the DHS published the Homeland Threat Assessment 2024, which highlighted the major threats to U.S. critical infrastructure, especially the transportation sector. The report revealed an increased threat of foreign actors using AI-powered tools to gain network access, disrupt services, and obtain unauthorized access to sensitive information.

The report pointed out that certain nation-states, including China, are “developing other AI technologies that could undermine U.S. cyber defenses, including generative AI programs that support malicious activity such as malware attacks.”

The U.S. has already experienced a cyberattack on a water treatment facility in Texas, and Russian state-backed hackers are suspected of being responsible.

This week, the DHS published new guidelines to mitigate AI risks to critical infrastructure. It also released a new report on AI misuse in the development and production of chemical, biological, radiological, and nuclear (CBRN) threats.

“Based on CISA’s expertise as National Coordinator for critical infrastructure security and resilience, DHS’ Guidelines are the agency’s first-of-its-kind cross-sector analysis of AI-specific risks to critical infrastructure sectors and will serve as a key tool to help owners and operators mitigate AI risk,” commented CISA Director Jen Easterly.

The DHS worked in coordination with the Cybersecurity and Infrastructure Security Agency (CISA) to develop the ‘Mitigating AI Risk: Safety and Security Guidelines for Critical Infrastructure Owners and Operators’. The guidelines address attacks using AI, attacks targeting AI systems, and failures in AI design and implementation.

To address the risks posed by AI, the DHS has outlined a four-pronged strategy: Govern, Map, Measure, and Manage. The first step, Govern, is to prioritize and take ownership of safety and security outcomes. Map follows, establishing the context in which AI threats can be evaluated and understood. Measure involves developing systems to assess, analyze, and track AI risks. Finally, Manage means implementing and maintaining the identified risk management controls for those threats.

In the 180 days since President Biden's AI executive order, the DHS has taken several key steps to counter AI threats, including establishing the AI Corps and releasing a detailed roadmap for using AI technologies to benefit the American public and strengthen homeland security's ability to safeguard Americans. This department-wide effort to address AI risks and opportunities is exactly the kind of sustained work needed to keep pace with rapidly evolving AI threats.

