Internet2

Consent: Putting the “I” in CAR

Aug 16, 2017, by Mary McKee
Tags: Frontpage News, Recent Posts, TIER, Trust & Identity, Trust and Identity in Education and Research

Note: Consent-Informed Attribute Release (CAR) is a project to develop a user consent module that integrates institutional and individual preferences for releasing attributes to relying parties. CAR is intended to help users make informed and effective decisions, and to allow the institution full flexibility in managing when a user sees a consent decision and what choices are presented. It integrates with the Shibboleth IdP but can also serve the consent needs of OIDC and OAuth. It has been developed partially under a National Strategy for Trusted Identities in Cyberspace grant from NIST. It is in the pipeline to become a TIER component. The lead development institution is Duke University, and Mary is one of several talented folks there working on the effort.
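The core idea in the note above — a consent module that merges institutional rules with individual preferences before releasing attributes to a relying party — can be sketched in a few lines. This is only an illustrative sketch, not CAR's actual API or policy model: the names `decide`, `Decision`, and the `"required"`/`"prohibited"`/`"optional"` rule values are hypothetical assumptions for the example.

```python
from enum import Enum

class Decision(Enum):
    RELEASE = "release"
    DENY = "deny"
    PROMPT = "prompt"  # show the user a consent screen

def decide(attribute: str, relying_party: str,
           institutional_policy: dict, user_prefs: dict) -> Decision:
    """Combine institutional rules with an individual's saved choices.

    institutional_policy maps (relying_party, attribute) to one of:
      "required"   - always released; the user cannot opt out
      "prohibited" - never released, regardless of user preference
      "optional"   - the user decides; prompt if no choice is saved
    """
    rule = institutional_policy.get((relying_party, attribute), "prohibited")
    if rule == "required":
        return Decision.RELEASE
    if rule == "prohibited":
        return Decision.DENY
    # "optional": honor a saved user choice, otherwise intercept with consent UI
    saved = user_prefs.get((relying_party, attribute))
    if saved is True:
        return Decision.RELEASE
    if saved is False:
        return Decision.DENY
    return Decision.PROMPT
```

The point of the sketch is the ordering: the institution bounds what the user may decide, and the consent screen appears only where the institution has left the choice open — which matches the note's description of "full flexibility in managing when a user sees a consent decision."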

As a numbers person and an introvert, I’ve always favored passive usability research, like using automated A/B testing to measure impact on conversion rates. My work with this kind of metrics-based development at Duke led to my current role on the Consent-Informed Attribute Release (CAR) project.

This is awkward, because passive usability research is wholly insufficient for the consent space, where we seek to achieve understanding and engagement rather than measure button clicks. As anyone familiar with CAR’s full name knows, there is a silent ‘I’, and it stands for informed. It isn’t enough to demonstrate that users can navigate this system; our success must be evaluated in more qualitative terms, measuring the informedness of each interaction.

Fortunately, we have a phenomenal usability team at Duke that has spent months buying coffees for strangers in exchange for their participation in formal usability testing on CAR. From this, we’ve gathered a wealth of insights that have pushed development forward, iteration after iteration:

  • It is not, as we worried, too big an ask to expect a user to understand the distinction between a resource holder and a relying party, but the relative position of information about each is critical to the user’s perception of that relationship.
  • Form elements are very distracting, and people will gravitate to buttons and other inputs before reading anything else. We must plan for this.
  • For better and worse, the average user is primed for how to think about this problem. Reading and responding to consent intercept screens is more familiar and comfortable than we anticipated, but some seemingly innocuous words can measurably reduce understanding. For example, use of the word “share” consistently evokes comparisons to social media, increasing user concern that attributes will be published to a public forum.
  • Users want to see consent screens more than we anticipated. In the name of simplicity, I hoped user testing would support removal of the option to opt in to seeing a consent screen on subsequent visits to a site. It did not – a small but significant number of users do want to take advantage of this function.

Thanks to this testing, we are converging on a final design that consistently produces measurable understanding about what is happening with real-time attribute release. We now turn our attention to measuring a different kind of informed consent that I consider to be too often overlooked and ultimately even more impactful to institutions: the consent that administrators give to define the conditions under which information about populations they govern can be shared.

Because CAR manages both users’ policies and institutional policies about data transfer between systems, it gives us a mechanism for dealing with what I consider to be the elephant in the room of all identity management policy discussions: the intersectionality of data stewardship. For example, it is insufficient to address student data privacy only within our student system of record, because that is not the only system of record containing student data, and student data is student data regardless of provenance.

My team at Duke works diligently to make sure that this kind of policy is enforced across all relevant systems, but this is increasingly untenable with the tools we have today. Through automation and innovation, we are seeing linear growth of resource holders and exponential growth of relying parties. As a result, ensuring that administrators have all the relevant information to make and review policy decisions is becoming increasingly difficult.  

I’m proud to be working on a project that is so ahead of the curve with regard to user consent, but I’m even more excited to get to work on a way to visually, intuitively, and consistently represent the terms and implications of attribute release contracts within our institutions. I can’t imagine a more important effort in this space than to create this level of transparency about attribute release, allowing administrators and technologists to focus on their respective jobs with confidence.