In a blog post, Google said its Threat Analysis Group (TAG) tracks more than 270 targeted or government-backed groups from over 50 countries. However, it did not clarify which governments have targeted these users.
Google shared recent findings on government-backed phishing, threats and disinformation, as well as a new bulletin to share information about actions it has taken against accounts that it attributes to coordinated influence campaigns. “Last month, we sent 1,755 warnings to users whose accounts were targets of government-backed attackers,” it said.
A heatmap on “Distribution of targets of government-backed phishing attempts in April 2020” showed that 51-100 users in India had received such warnings. The tech giant said government-backed or state-sponsored groups have different goals in carrying out their attacks.
“…Some are looking to collect intelligence or steal intellectual property; others are targeting dissidents or activists, or attempting to engage in coordinated influence operations and disinformation campaigns,” it added.
The company emphasised that its products are designed with robust built-in security features, like Gmail protections against phishing and Safe Browsing in Chrome, but it still dedicates significant resources to developing new tools and technology to help identify, track and stop this kind of activity. “In addition to our internal investigations, we work with law enforcement, industry partners, and third parties like specialized security firms to assess and share intelligence,” it said.
Outlining steps taken by the company, Google said it swiftly removes such content from its platforms and terminates these actors’ accounts. It added that it also routinely exchanges information and shares its findings with others in the industry.
Google said that in March, it terminated three advertising accounts, one AdSense account, and 11 YouTube channels as part of its actions against a coordinated influence operation linked to India. It added that the campaign, which was sharing messages in English supportive of Qatar, was consistent with similar findings reported by Facebook.
“Since March, we’ve removed more than a thousand YouTube channels that we believe to be part of a large campaign and that were behaving in a coordinated manner. These channels were mostly uploading spammy, non-political content, but a small subset posted primarily Chinese-language political content,” it said.