July 31, 2020 · Posted in: Non-U.S.

From the U.K. Information Commissioner’s Office, the foreword to the guidance:

The innovation, opportunities and potential value to society of AI will not need emphasising to anyone reading this guidance.

Nor is there a need to underline the range of risks involved in the use of technologies that shift processing of personal data to complex computer systems with often opaque approaches and algorithms.

This guidance helps organisations mitigate the risks specifically arising from a data protection perspective, explaining how data protection principles apply to AI projects without losing sight of the benefits such projects can deliver.

What stands out in the following pages is that the underlying data protection questions for even the most complex AI project are much the same as with any new project. Is data being used fairly, lawfully and transparently? Do people understand how their data is being used? How is data being kept secure?

The legal principle of accountability, for instance, requires organisations to account for the risks arising from their processing of personal data – whether they are running a simple register of customers’ contact details or operating a sophisticated AI system to predict future consumer demand.

Aspects of the guidance should act as an aide-memoire to those running AI projects. There should be no surprises in the requirements for data protection impact assessments or for documenting decisions. The guidance offers support and methodologies on how best to approach this work.

Other aspects of the law require greater thought. Data minimisation, for instance, may seem at odds with systems that allow machine learning to conclude what information is necessary from large data sets. As the guidance sets out though, there need not be a conflict here, and there are several techniques that can ensure organisations only process the personal data needed for their purpose.

Similarly, transparency of processing, mitigating discrimination, and ensuring individual rights around potential automated decision-making can pose difficult questions. Aspects of this are complemented by our existing guidance, ‘Explaining decisions made with AI’, published with the Alan Turing Institute in May 2020.

The common element to these challenging areas, and perhaps the headline takeaway, is the value of considering data protection at an early stage. Mitigation of risks must come at the design stage: retrofitting compliance as an end-of-project bolt-on rarely leads to comfortable compliance or practical products. This guidance should accompany that early engagement with compliance, in a way that ultimately benefits the people whose data AI approaches rely on.

The development and use of AI within our society is growing and evolving, and it feels as though we are at the early stages of a long journey. We will continue to focus on AI developments and their implications for privacy by building on this foundational guidance, and continuing to offer tools that promote privacy by design to those developing and using AI.

I must end with an acknowledgment of the excellent work of one of the document’s authors, Professor Reuben Binns. Prof Binns joined the ICO as part of a fellowship scheme designed to deepen my office’s understanding of this complex area, as part of our strategic priority of enabling good practice in AI. His time at the ICO, and this guidance in particular, is testament to the success of that fellowship, and we wish Prof Binns the best as he continues his career as Associate Professor of Computer Science at the University of Oxford.

We will continue to develop this guidance to ensure it stays relevant.

We would like to continue to consult with those using the guidance to understand how it works in practice and ensure it remains relevant and consistent with emerging developments.

We are also interested in what tools the ICO could create to complement the guidance and support you to implement it in practice.

Additional contact information and a feedback form for those who would like to contribute to the guidance can be found on the ICO’s site.

To access the other sections of the guidance, start here on the ICO’s site.

July 31, 2020 · Posted in: Business, Govt, Surveillance, U.S.

Brooke Crothers reports:

Google now sets a time limit on data used by police for tracking suspects, Google’s CEO said at Wednesday’s congressional hearing with tech giants.

The data is used for a so-called “geofence warrant,” which taps into a massive Google database that tracks where you go anonymously. It’s part and parcel of a trend by tech companies to track where you go, what you eat, and what you buy, among a host of other tracking information.

Read more on Fox News.

via FourthAmendment.com.

July 31, 2020 · Posted in: Laws

It is so hard to get privacy protections for consumers that you might think that if a law has privacy provisions, you’d want to keep them. Not necessarily, as Robert Gellman explains in an opinion piece that opened my eyes — and may open yours, too.

How do the privacy protections in the Gramm-Leach-Bliley Act — the well-known banking law — help consumers? The short answer is that the GLBA does almost nothing to help consumer privacy. Understanding that the GLBA is essentially a privacy fraud is important because exemptions for the GLBA are features of some state and federal privacy bills.

Let’s look at the provisions of the GLBA. The privacy part of the law provides two — and only two — provisions for consumers. First, each financial institution must have a privacy notice. That’s something but not much.

Read more on IAPP.