Week of August 16, 2021
August 23, 2021
Want to receive these weekly privacy recaps in your inbox? Sign up for our privacy newsletter, A Little Privacy, Please.
The People’s Republic of China’s National People’s Congress voted on Friday to adopt the Personal Information Protection Law (PIPL), a comprehensive data protection law that will go into effect November 1. The final text of the law was released Friday afternoon (ET).
PIPL has been likened to GDPR and contains many similar aspects, including an extra-territorial reach (i.e., it applies to businesses outside the People’s Republic of China who provide products or services to, or evaluate the behavior of, natural persons in the territory) and requiring a legal basis, such as consent, for processing. Unlike GDPR, PIPL does not include a legal basis similar to “legitimate interest” but does include the possibility for “other circumstances stipulated by laws and administrative regulations”, indicating that further regulations allowing for expanded legal bases may be forthcoming.
Canada’s Office of the Privacy Commissioner of Canada (OPC) updated several guidance documents, including (among others) its Guidelines for obtaining meaningful consent, Guidelines on privacy and online behavioral advertising, and Policy Position on online behavioral advertising, to include considerations for businesses to evaluate what types of information are “sensitive” (requiring express consent under the Personal Information Protection and Electronic Documents Act (PIPEDA)).
Specifically, the guidance identifies sensitive data as including health and financial data, ethnic and racial origins, political opinions, genetic and biometric data, an individual’s sex life or sexual orientation, and religious/philosophical beliefs. The OPC said in its announcement that it would issue an Interpretation Bulletin later this year to further explain issues related to sensitive personal information.
These guidance documents have long stated that opt-out (rather than opt-in) consent could be considered reasonable provided that (among other criteria) the information collected and used is limited to non-sensitive information, but the guidance now clarifies what it means by non-sensitive information. This may affect advertisers who wish to use information such as political opinions or religious beliefs to target advertisements to individuals in Canada.
Hong Kong’s Privacy Commissioner for Personal Data (PCPD) published “Guidance on Ethical Development and Use of AI”, setting out ethical principles that organizations should follow when developing AI, including that organizations should disclose their use of AI and relevant policies, practice effective data governance, and avoid bias and discrimination. The guidance also includes a practice guide that organizations can use in managing their AI systems.
Notably for the digital advertising industry, the guidance mentions, when discussing risk assessments, that an AI system used to serve personalized advertisements tends to carry lower risks than, say, one used to assess the creditworthiness of individuals, because the former “may not have a significant impact on individuals”. It also notes that an AI system that is fully autonomous may be riskier than one that only provides recommendations to human actors.
FTC Commissioner Rebecca Kelly Slaughter published a whitepaper on algorithmic decision-making that highlights past and potential harmful outcomes (including bias and discrimination) from algorithmic decision-making and discusses how FTC rulemaking and legislation could address such harms.
The whitepaper discusses several potential harms in the use of algorithmic decision-making in the advertising industry, including discrimination that may result from lookalike audience models (citing examples in the housing and job recruiting contexts) and the proliferation of ad targeting based on data collected from children as a result of machine-learning algorithms designed to attract, maintain, and monetize children’s attention.
We are seeing increasing focus from regulators and policymakers on the impacts of AI and algorithmic decision-making, suggesting that companies should consider (if they haven’t already) adding transparent and responsible use of algorithms to their “data ethics” playbook.
A U.S. District Judge in South Carolina (in a multi-jurisdictional case) allowed California Consumer Privacy Act (CCPA) claims arising from a ransomware attack to proceed against software maker Blackbaud Inc. In the decision, the Judge held that (based on plaintiff assertions) Blackbaud qualifies as a “business” (rather than a service provider) under the CCPA, citing that Blackbaud uses personal data to develop, improve, and test its services, that it is registered as a “data broker” in California, and that qualifying as a “service provider” does not insulate Blackbaud from also being a “business” under CCPA.
Although the facts of this case are unrelated to the advertising industry, it provides an interpretation that may impact whether participants in the advertising ecosystem consider themselves a “business” under CCPA, including whether they may consider themselves both a “service provider” and a “business”, requiring them to meet the obligations of both roles.
PubMatic, the US adtech company, saw a California district court judge dismiss a class-action suit brought against it by a UK citizen living in England. The plaintiff alleged that PubMatic tracked him and the other class members across the web, in violation of the UK version of the GDPR.
The judge based her dismissal on the fact that the district court would find it burdensome to familiarize itself with the UK version of the GDPR, especially given that PubMatic was willing to contest the suit in the UK court.
Although there may be considerable interest among U.S. law firms in establishing a precedent for hearing GDPR claims against US-based tech companies in US courts, the Northern District of California declined to provide the precedent they were looking for.
The UK Information Commissioner’s Office (ICO) approved the first certification scheme criteria under Article 42(5) of the UK General Data Protection Regulation, which allows for establishment of approved data protection certification mechanisms for demonstrating compliance with the regulation. The approved schemes include standards for data sanitization, age checking, and age appropriate design of information society services.
These schemes are the first of likely more standards that give companies an opportunity to demonstrate their heightened commitment to compliance and data ethics.
Want more of the privacy highlights that matter to adtech and martech? Sign up for our privacy newsletter, A Little Privacy, Please.
A Little Privacy, Please weekly recaps are provided for general, informational purposes only, do not constitute legal advice, and should not be relied upon for legal decision-making. Please consult an attorney to determine how legal updates may impact you or your business.