Canada’s security agencies urged to detail AI use

A federal advisory body is calling on Canada’s security agencies to publish detailed descriptions of their current and intended uses of artificial intelligence systems and software applications.

In a new report, the National Security Transparency Advisory Group also urges the government to look at amending legislation being considered by Parliament to ensure oversight of how federal agencies use AI.

The recommendations are among the latest measures proposed by the group, created in 2019 to increase accountability and public awareness of national security policies, programs and activities.

The government considers the group an important means of implementing a six-point federal commitment to be more transparent about national security.

Federal intelligence and security agencies responded to the group’s latest report by stressing the importance of openness, though some pointed out the nature of their work limits what they can divulge publicly.

Security agencies are already using AI for tasks ranging from translation of documents to detection of malware threats. The report foresees increased reliance on the technology to analyze large volumes of text and images, recognize patterns, and interpret trends and behaviour.

As use of AI expands across the national security community, “it is essential that the public know more about the objectives and undertakings” of national border, police and spy services, the report says.

“Appropriate mechanisms must be designed and implemented to strengthen systemic and proactive openness within government, while better enabling external oversight and review.”

As the government collaborates with the private sector on national security objectives, “openness and engagement” are crucial enablers of innovation and public trust, while “secrecy breeds suspicion,” the report says.

A key challenge in explaining the inner workings of AI to the public is the “opacity of algorithms and machine learning models” — the so-called “black box” that can leave even national security agencies without a full understanding of their own AI applications, the report notes.

Ottawa has issued guidance on federal use of artificial intelligence, including a requirement to carry out an algorithmic impact assessment before creating a system that assists or replaces the judgment of human decision-makers.

It has also introduced the Artificial Intelligence and Data Act, currently before Parliament, to ensure responsible design, development and rollout of AI systems.

However, the act and a new AI commissioner would not have jurisdiction over government institutions such as security agencies, prompting the advisory group to recommend Ottawa look at extending the proposed law to cover them.
