OTTAWA — The advisory body tasked with making recommendations for Canada’s pending online safety legislation has failed to reach agreement on how online harm should be defined and whether harmful content should be removed from the internet outright.
On Friday, the federal government published a summary of the tenth and final meeting of the expert panel, capping three months of deliberations on what a future legal and regulatory framework could look like.
The 12-member panel brought together experts on issues such as hate speech, terrorism, child sexual exploitation and the regulation of online platforms. Their conclusions come after Ottawa released a proposal for an online harms law last summer, prompting some stakeholders involved in the consultations to urge the government to go back to the drawing board.
The findings highlight the major challenges the federal government will face in introducing the legislation, which the Liberals had promised to table within 100 days of forming a government last fall.
Heritage Minister Pablo Rodriguez is now beginning a series of regional and virtual roundtables to gather more feedback on the framework, starting with the Atlantic provinces.
Here’s what the experts – who remain anonymous in the report – concluded.
What is “online harm” anyway?
In its proposal last year, the government identified five types of “harmful content”: hate speech, terrorist content, incitement to violence, sexual exploitation of children and non-consensual intimate images.
Most members of the panel noted that child exploitation and terrorist content “should be clearly addressed by future legislation”. Others found the five categories “deeply problematic”, with one member criticizing the definition of terrorism for focusing on “Islamic terror” while omitting other forms.
Rather than isolating specific types of harmful content, some experts suggested that harm could be defined more broadly, for example as “harm to a specific segment of the population, such as children, the elderly or minority groups”. Panel members also disagreed on whether harm should be narrowly defined in legislation, with some arguing that harmful content is constantly evolving, while others said regulators and law enforcement agencies would require strict definitions.
Disinformation, something Rodriguez has previously said needs to be addressed with “urgency,” also took up an entire session of the panel’s deliberations. While last year’s government proposal did not list intentionally misleading content as a category, last summer’s consultations highlighted disinformation as a possible category of online harm.
The panel concluded that disinformation “is difficult to capture and define”, but agreed that it leads to serious consequences such as inciting hatred and undermining democracy. Ultimately, members argued that disinformation should not be defined in legislation, because doing so would put the government in the position of distinguishing between truth and falsehood, something it simply cannot do.
Should harmful content be deleted from the Internet?
Another key area the experts couldn’t agree on was whether upcoming legislation should force platforms to remove certain content.
The debate stems from long-standing criticism of the government’s earlier proposal, which would have required platforms to remove harmful content within 24 hours of it being flagged, and from concerns about interference with freedom of expression.
While experts appeared to agree that explicit calls for violence and child sexual exploitation material should be removed, some cautioned against taking down too much content, while others “expressed a preference for over-removal of content rather than under-removal”.
Experts were divided on what thresholds should trigger content removal, with some suggesting that harms could be sorted into two categories: a “serious and criminal” category with the possibility of recourse, or a less serious category without an opportunity to appeal.
There was also disagreement over whether private communications, such as content sent through chat rooms, Facebook Messenger, or Twitter and Instagram direct messages, should be regulated and subject to removal. Some members said private communications that harm children should be regulated, while others said reaching into private chats would be “difficult to justify from a Charter perspective”.
What might happen after content is reported?
Canadian lawmakers must grapple not only with what constitutes online harm and what to do about it, but also with what happens to victims, and to those found to have posted harmful content, after posts are flagged.
It is not yet known which body would be responsible for overseeing Ottawa’s online safety framework, although the appointment of a dedicated commissioner, similar to Australia’s eSafety Commissioner, has been raised as an option.
Experts agreed that platforms should have a review and appeals process for all moderation decisions, with some suggesting the creation of an ombudsperson to assist victims.
It was pointed out that such a role would need to be completely independent of governments, any future commissioner, online platforms and law enforcement agencies.
“Some suggested that the regime could start with an ombudsperson as a hub for victim support and evolve into a body that later decides disputes,” the report said.
However, experts were divided on how an ombudsperson would work, with some arguing that users need an external “place” to raise concerns, given widespread distrust of social media platforms.
Others “emphasized that creating an independent body to make decisions on takedowns would be a massive undertaking, tantamount to creating a whole new quasi-judicial system with major constitutional issues surrounding federalism and Charter matters.”
Experts also expressed concern that recourse channels simply might not be practical given the amount of content, complaints and appeals the legislation could generate.
Ultimately, they concluded that the idea “must not simply be abandoned, it must be further developed and tested”.