The Legal Innovation & Technology Lab's Spot API Beta
@ Suffolk Law School  -  Spotter Updated: 2020-12-14 (Build 6)

What is Spot?


Spot is an issue spotter. Give Spot a non-lawyer's description of a situation, and it returns a list of likely issues from the National Subject Matter Index (NSMI), Version 2. The NSMI provides the legal aid community with a standard nomenclature for talking about client needs. It includes issues like eviction, foreclosure, bankruptcy, and child support. Spot is provided as a service over an API. Mostly, this means it's built for use by computer programs, not people. Coders can build things (like websites) on top of the API. The hope is that by automating part of issue identification, developers will use Spot to help people in need of legal assistance better access available resources. See Pew Grant Will Take ‘Learned Hands’ Project from Prototype to Production, to Help ID Consumers’ Legal Issues.
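Since Spot is meant to be called by programs rather than people, an exchange with the API might look something like the following sketch. The request and response shapes here are assumptions for illustration only; the field names ("text", "labels", "title", "pred") are hypothetical, so consult the API documentation for the actual format.

```python
import json

def build_spot_request(text: str) -> str:
    """Build a JSON request body asking Spot to spot issues in `text`.
    The "text" field name is an assumption, not the official spec."""
    return json.dumps({"text": text})

def extract_labels(response_body: str) -> list:
    """Pull NSMI issue labels out of a hypothetical JSON response."""
    data = json.loads(response_body)
    return [issue["title"] for issue in data.get("labels", [])]

# Example: a layperson's description of a housing problem.
request = build_spot_request("My landlord is trying to kick me out "
                             "because I complained about the mold.")

# A response might look like this (shape assumed for illustration):
sample_response = json.dumps({
    "labels": [{"title": "Housing", "pred": 0.93},
               {"title": "Eviction from a home", "pred": 0.87}]
})
print(extract_labels(sample_response))  # ['Housing', 'Eviction from a home']
```

A tool built on Spot would send such a request over HTTP with a developer API key and map the returned NSMI labels to local resources (for example, pointing a user whose text was tagged "Eviction from a home" toward housing-court self-help materials).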

If you are interested in taking Spot for a test drive, you can do so here. There you will also find links to other sites and tools that make use of Spot.

Who is behind Spot, and who can use it?

Spot is funded by foundation support and run by Suffolk University Law School's Legal Innovation and Technology (LIT) Lab. We are a non-profit, and Spot's primary aim is providing AI-powered issue spotting to non-profit and government organizations at no cost.

The API is in the very early stages of development and is mostly an empty shell at this time. We do, however, have documentation. Over time, the number of issues addressed and the spotter's performance will improve as we grow the size of the training data and tweak things under the hood. Feel free to sign up for a developer account and kick the tires. When signing up, you will be presented with our full terms of service.

What are the costs and benefits of letting Spot remember user data?

If Spot is given permission to remember a text, it may be read by people on our team. We do not sell this data to third parties, and we only share it with a closed group working on quality control and labeling. If the text is labeled, it may be used to improve Spot's performance by serving as a training example for our algorithms. In this way, sharing a text can help others with similar issues by making it easier for Spot to identify those issues.

This sharing is really important for populations not represented in the Learned Hands data (see below). Different communities talk about issues in different ways, and in order for Spot to recognize issues in a text, it needs to have seen them talked about in that way before.

Where does Spot get its data?

Spot builds upon data from the Learned Hands online game, a partnership between the LIT Lab and Stanford's Legal Design Lab. Learned Hands aims to crowdsource the labeling of laypeople's legal questions for the training of machine learning (ML) classifiers/issue spotters. Currently, this labeling is limited to publicly available historic questions from the r/legaladvice forum on Reddit. See Stanford and Suffolk Create Game to Help Drive Access to Justice.

The most recent Learned Hands data can be found here: 2020-12-14_95p-confidence_binary.csv. This labeled data is licensed under a CC BY-NC-SA 4.0 International License. For context, players of Learned Hands are asked to say whether a label is or is not present in a text. These answers are used to calculate a Wilson score confidence interval. In the file linked above, an issue's column contains a 1 if the lower bound of this interval exceeded 50% and a 0 if the upper bound dropped below 50%. If the interval for a text straddles 50%, no value is included for that label. That is, the values included are those where we're 95% confident that more than half of folks playing Learned Hands would agree.
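The labeling rule above can be made concrete with a short sketch. This is a standard Wilson score interval computation applied to yes/no votes; the vote counts below are made up for illustration and are not from the actual Learned Hands data.

```python
import math

def wilson_interval(yes: int, total: int, z: float = 1.96):
    """95% Wilson score confidence interval for the proportion of
    'yes' votes among `total` votes (z = 1.96 for 95% confidence)."""
    if total == 0:
        return (0.0, 1.0)
    p = yes / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return (center - half, center + half)

def label_value(yes: int, total: int):
    """Apply the rule described above: 1 if the interval's lower bound
    exceeds 50%, 0 if its upper bound falls below 50%, and no value
    (None, i.e. a blank cell) if the interval straddles 50%."""
    lower, upper = wilson_interval(yes, total)
    if lower > 0.5:
        return 1
    if upper < 0.5:
        return 0
    return None  # interval straddles 50%; cell is left blank

print(label_value(14, 15))  # 1 (players confidently agree the issue is present)
print(label_value(1, 15))   # 0 (players confidently agree it is absent)
print(label_value(8, 15))   # None (too close to call at 95% confidence)
```

Using the Wilson interval rather than the raw vote fraction means a text needs enough votes, not just a favorable ratio, before a 1 or 0 is recorded; a 2-of-3 vote stays blank while a 14-of-15 vote becomes a 1.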

In addition to the data labeled by Learned Hands, users of the API (those building tools with it) have the option to let Spot forget or remember the content of text shared with it. If Spot is given permission to remember a text, we may use it to improve the issue spotter by having humans perform their own issue spotting and using their insights to retrain the issue spotter. See What are the costs and benefits of letting Spot remember user data? above.

Developers are encouraged to consider their use case carefully when deciding how to incorporate end user input regarding the remembering of texts. In most cases, it will be prudent to have the end user either opt in or opt out. Only in very limited cases is it appropriate to hard-code a universal choice.
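One way to follow that advice is to thread the end user's explicit choice into each request and default to forgetting. The "save-text" field name below is an assumption for illustration; check the API documentation for the actual parameter.

```python
def build_payload(text: str, user_consented: bool) -> dict:
    """Build a request payload that only asks Spot to remember the text
    when the end user has explicitly opted in. The "save-text" key is
    hypothetical; the default (no consent given) is to forget."""
    return {
        "text": text,
        "save-text": 1 if user_consented else 0,
    }

# Without an explicit opt-in, the text is not remembered.
print(build_payload("sample question", user_consented=False)["save-text"])  # 0
print(build_payload("sample question", user_consented=True)["save-text"])   # 1
```

Defaulting to forgetting keeps the tool safe when a consent prompt is skipped or fails, while still letting users who opt in contribute texts that help Spot serve underrepresented communities.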