Cite as: Kath A. Straub, Julie Clement, Annetta L. Cheek, and Sean P. Mahaffey. Even Lawyers want to understand: Plain language increases lawyers’ credibility with both lawyers and laypeople. Paper presented at IC Clear | Clarity (Belgium, Netherlands) 2014.
At a glance
Background: Benson & Kessler (1987) compared the perceptions of lawyers and judges
about lawyers who write using plain language versus those writing in traditional legal
language. They concluded that lawyers who write in legalese are “likely to have their
work judged as unpersuasive and substantively weak.” Further, “their professional
credentials may be judged less credible.”
Objective: We validate and extend that study to further explore how both lawyers and
non-lawyers perceive lawyers who use legal language versus plain language.
Method: 38 lawyers and 93 non-lawyers each read two short passages, presented as written by a specific lawyer. Then, readers rated the lawyer who wrote the passages on a series of characteristics, including clarity of writing, trustworthiness, pedigree, ability to win
cases, and whether they would be satisfied with that lawyer as their counsel.
Detailed findings in the deck
Conclusions
1. Even lawyers have trouble understanding lawyers. Plain language helps.
2. Lawyers agree that plain language is clearer, more specific, and more persuasive.
3. Law schools still need to teach lawyers to understand two languages: plain and legalese.
4. People--even lawyers--want their lawyers to use plain language.
References:
Robert W. Benson and Joan B. Kessler, Legalese v. Plain English: An Empirical Study of
Persuasion and Credibility in Appellate Brief Writing, 20 Loy. L.A. L. Rev. 301 (1987).
Author Contacts
Kath Straub kath@usability.org < Fielding questions
Julie Clement julieannclement@gmail.com < Presenter
Annetta Cheek alcplain@gmail.com
Sean Mahaffey Mahaffsp@mail.uc.edu
Lawyers Prefer Plain Language Over Legalese
1. IC Clear | Clarity2014
‘Learning to be clear’
Even Lawyers want to understand
Plain language increases lawyers’ credibility
to both lawyers and laypeople
Kath Straub – Usability.org
Julie Clement – J Clement Communications
Annetta Cheek – Center for Plain Language
Sean Mahaffey – U Cincinnati College of Law
2. However guilty defendants, upon due
inquiry, might prove to have been,
they were, until convicted, presumed
to be innocent.
Powell v Alabama 287 U.S. 45 (1932).
3. However guilty defendants, upon due
inquiry, might prove to have been,
they were, until convicted, presumed
to be innocent.
Let us help ….
However guilty defendants might
prove to have been at trial, they were
presumed to be innocent until
convicted.
Powell v Alabama 287 U.S. 45 (1932).
4. This study
We set out to replicate Benson & Kessler
to reality check our data collection method.
Along the way, we noticed
some interesting findings.
5. We collected data from
38 lawyers (unpaid volunteers)
93 laypeople (paid MTurk workers)
6. 50%
Participant Demographics
Lawyers (n=38)
Non-Lawyers (n=91)
Gender
Female
Male
Female
Male
Age
Educa1on
Na1ve
Language
18-‐
24
25-‐34
35-‐54
55+
50%
25
–
34
55+
HS
Some
college
BA/BS
Graduate
Degree
Graduate
Degree
English
English
Lithuanian
German
7. Our stimuli were items similar to B&K’s.
The plain language versions were cleaned up a bit.
Thanks to Joe Kimble.
8. In the findings, note
• Differences between legalese and plain
• Similarities between lawyers and laypeople
• Whether people agree or disagree with the statement
10. Is the writer easy to understand?
[Charts: % Agree / Neutral / Disagree (0–100%), Plain vs. Legal, for lawyers and non-lawyers.]
Lawyers: significant difference. Non-lawyers: significant difference.
Disagree for legalese.
11. How well will clients understand this lawyer?
[Charts: % Very Easy / Easy / Hard / Very Hard (0–100%), Plain vs. Legal, for lawyers and non-lawyers.]
Lawyers: marginally different (p < .09). Non-lawyers: significantly different.
12. How well will juries understand this lawyer?
[Charts: % Very Easy / Easy / Hard / Very Hard (0–100%), Plain vs. Legal, for lawyers and non-lawyers.]
Lawyers: significantly different. Non-lawyers: significantly different.
13. How well will other lawyers understand this lawyer?
[Charts: % Very Easy / Easy / Hard / Very Hard (0–100%), Plain vs. Legal, for lawyers and non-lawyers.]
Lawyers: significantly different. Non-lawyers: significantly different.
Lawyers agree that legalese is hard to understand.
14. Takeaways
Lawyers are just plain hard to understand.
Even for other lawyers.
Plain language helps.
20. Did the writer go to a prestigious law school?
[Charts: % Agree / Neutral / Disagree (0–100%), Plain vs. Legal, for lawyers and non-lawyers.]
Lawyers: significant difference. Non-lawyers: no difference. Surprise?
Disagree for legalese.
21. Was the writer in the top of their class?
[Charts: % Agree / Neutral / Disagree (0–100%), Plain vs. Legal, for lawyers and non-lawyers.]
Lawyers: significant difference. Non-lawyers: no difference. Surprise?
Disagree for legalese.
22. What level has the writer reached in the firm?
[Charts: % Managing Partner / Partner / Associate / Staff Attorney / Paralegal (0–100%), Plain vs. Legal, for lawyers and non-lawyers.]
Lawyers: significant difference. Non-lawyers: no difference.
Plain language lawyers do not make Managing Partner.
23. Takeaways
People expect prestigious schools to teach
both languages: Plain and legalese
Plain language helps with cases and clients,
but to run the firm, you need legalese
25. Would you be satisfied with this lawyer as counsel?
[Charts: % Satisfied / Neutral / Dissatisfied (0–100%), Plain vs. Legal, for lawyers and non-lawyers.]
Lawyers: marginally different (p < .09). Non-lawyers: significant difference.
26. Is the writer trustworthy?
[Charts: % Agree / Neutral / Disagree (0–100%), Plain vs. Legal, for lawyers and non-lawyers.]
Lawyers: significant difference. Non-lawyers: not different. Surprise!
27. Does the writer win cases?
[Charts: % Agree / Neutral / Disagree (0–100%), Plain vs. Legal, for lawyers and non-lawyers.]
Lawyers: marginally different (p < .06). Non-lawyers: no difference.
Again, non-lawyers are not sure that plain language helps.
29. Our conclusions …
• Even lawyers have trouble understanding lawyers. Plain
language helps.
• Lawyers agree that plain language is clearer, more specific,
and more persuasive.
• Law schools still need to teach lawyers to understand two
languages: plain and legalese.
• People--even lawyers--want their lawyers to use plain
language.
31. If you have more questions, email…
Kath Straub
Usability.org
Center for Plain Language
kath@usability.org
Editor’s notes
Sean Mahaffey, the 3L who is working on this project, sent me this quote.
He said, on the one hand, it made me chuckle because of the stuff we’ve been doing [in PL].
But on the other hand, it made me think, if this is what I’m going to have to read for the rest of my life,
I might want to find another profession…
Really, though, it doesn’t take much effort to find indecipherable language in legal writing.
But this one seemed particularly noteworthy if you stop to think about it.
Somebody actually wrote that sentence. With all those commas. And intent.
REFER THEM TO HANDOUT
We wanted to start simple:
to validate with lawyers as a reality check for our methodology, which is remote surveys using either targeted lists of participants (for lawyers) or MTurk (for laypeople). If you are not familiar with MTurk, it is a “crowdsourced” participant pool that provides access to people (and thereby data) very quickly and inexpensively.
Julie > You can decide to defer questions about that to Annetta or me, or we can talk through it.
And then to extend the study to non-lawyers. Though we didn’t expect to get significant findings based on two passages.
We did, though – both validated the method and found significant differences
… but we also extended B&K a bit … and found some interesting contrasts between lawyers and laypeople along the way.
You might wonder, why non-lawyers? The value of plain language is obvious to non-lawyers, and most of law-in-action is lawyers talking to lawyers.
Our exploration of non-lawyers stemmed from the fact that one member of our team works to exonerate individuals who were falsely convicted, using DNA evidence.
The program (the Ohio Innocence Project) can help only a small number of people; others must represent themselves pro se.
Further, in the US, some states (e.g., California) see many pro se litigants. This means that a lot of non-lawyers end up reading a lot of copy produced by lawyers.
The number of participants is a methodological detail, but it is also important to note how small the N is for lawyers … in the context of the significant differences you will be seeing shortly.
In some ways, our data suggest that it’s easier to convince lawyers that communicating clearly matters than it is to convince laypeople.
If you are interested, I can give you details on the demographics. There is nothing surprising.
There were more men than women for the lawyers, and the reverse for the laypeople.
The lawyers were, on average, older than the laypeople.
They were also, not surprisingly, more educated.
All but 2 lawyers learned English as their first language. (Those two spoke Lithuanian and German, fwiw.)
Note how strongly the difference appears for lawyers.
We should note that we asked “easy to understand” in general
so there may have been some ambiguity about whether the
audience would be lawyers or not.
We will unpack that in a moment.
The next two slides are interesting. If you think about what we are asking, both lawyers and non-lawyers are saying that lawyers are hard to understand.
In fact, plain language notwithstanding, neither group seems to think that clients will understand lawyers.
If nothing else, this should give us serious pause about jury trials.
We should say a few things about the data collection and analysis as we jump into the data:
First, in the study, participants responded using 5-point Likert scales (pronounced LICK-ert, not LIKE-ert).
People could indicate: strongly agree, agree, neutral, disagree, or strongly disagree.
Collapsing the agrees/disagrees is actually more conservative with respect to the statistical differences (and it makes the graphs easier to interpret), so we took that route.
We show the results as a simplified comparison, that is: agree, neutral, disagree.
Our analysis is based on chi-squares.
If you are interested in the specific p-values, the more detailed slides will be posted on Kath Straub’s SlideShare.
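A minimal sketch of this kind of analysis, using entirely hypothetical counts (the real response data are in the posted slides): collapse the 5-point Likert ratings into agree/neutral/disagree, build a 2×3 table (Plain vs. Legal × response), and compute the chi-square statistic of independence. With (2−1)(3−1) = 2 degrees of freedom, the p-value has the closed form exp(−χ²/2), so no statistics library is needed.

```python
import math

def collapse(counts5):
    """Collapse 5-point Likert counts
    [strongly agree, agree, neutral, disagree, strongly disagree]
    into [agree, neutral, disagree]."""
    sa, a, n, d, sd = counts5
    return [sa + a, n, d + sd]

def chi_square_2x3(row_plain, row_legal):
    """Chi-square test of independence on a 2x3 contingency table
    (Plain vs. Legal x agree/neutral/disagree).
    Returns (statistic, p); for df = 2, p = exp(-chi2 / 2) exactly."""
    table = [row_plain, row_legal]
    grand = sum(map(sum, table))
    col_totals = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for row in table:
        row_total = sum(row)
        for obs, col_total in zip(row, col_totals):
            expected = row_total * col_total / grand
            chi2 += (obs - expected) ** 2 / expected
    return chi2, math.exp(-chi2 / 2)

# Hypothetical ratings of one statement, Plain vs. Legal version:
plain = collapse([18, 12, 5, 2, 1])   # -> [30, 5, 3]
legal = collapse([3, 5, 6, 14, 10])   # -> [8, 6, 24]
chi2, p = chi_square_2x3(plain, legal)
print(f"chi2 = {chi2:.2f}, p = {p:.2g}")
```

In practice, `scipy.stats.chi2_contingency` does the same computation for tables of any size; the hand-rolled version above is just to make the collapsing-plus-chi-square pipeline explicit.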
So, the lawyers agree that the plain language is more convincing … but laypeople are not so sure.
We expected this finding to be the opposite. That is, that laypeople would find lawyers who write plainly more trustworthy, but that lawyers would not see a distinction.
en.wikipedia.org/wiki/Amazon_Mechanical_Turk
>> Note that we pay above minimum wage, … on average about $12/hour.
Non-lawyers got about 50 cents to do this study.
This is how Benson actually asked the questions: he handed out a sheet of paper, had people circle the answers, and then collected them. Our questions have the same spirit but are worded differently.
We wanted to start simple: to replicate B&K, but also extend the study to non-lawyers.
We are interested in non-lawyers because one member of our team does a lot of work helping people who were wrongly convicted seek justice.
Benson talks about persuasiveness.
It’s probably important to note here that this is highly influenced by the specific passages (which are appended at the end of the deck).