Telehealth.org slide deck (McMenamin & Maheu, Sept. 14, 2023): Therapist AI & ChatGPT: How to Use Legally & Ethically

Marlene Maheu, Executive Director at Telehealth.org, LLC (Telebehavioral Health Institute, TBHI)
Therapist AI
& ChatGPT:
How to Use
Legally &
Ethically
Joseph P. McMenamin, MD, JD, FCLM
Joe McMenamin is a partner at Christian & Barton in
Richmond, Virginia. His practice concentrates on digital health
and on the application of AI in healthcare.
He is an Associate Professor of Legal Medicine at Virginia
Commonwealth University and Board-certified in Legal
Medicine.
Marlene M. Maheu, PhD
Marlene Maheu, PhD has been a pioneer in telemental health
for three decades.
With five textbooks, dozens of book chapters, and journal
articles to her name, she is the Founder and CEO of the
Telebehavioral Health Institute (TBHI).
She is the CEO of the Coalition for Technology in Behavioral
Science (CTiBS), and the Founder of the Journal for
Technology in Behavioral Science.
© 1994-2023 Telehealth.org, LLC. All rights reserved.
And you? Please introduce
yourself with city and
specialty.
Participants will be able to:
• Outline an array of legal and ethical issues implicated by the use of therapist AI and ChatGPT.
• Name the primary reason ChatGPT is not likely to replace psychotherapists in our lifetimes.
• Outline how best to minimize therapist AI and ChatGPT ethical risks today.
Learning
Objectives
Preventing
Interruptions
Maximize your learning by:
• Making a to-do list as we go.
• Turning on your camera & joining the
conversation throughout this activity.
• Muting your phone.
• Asking family and friends to stay
away.
We will not be discussing all slides.
Speaker Disclaimers
• Mr. McMenamin speaks neither for any legal client nor for Telehealth.org.
• Is neither a technical expert nor an intellectual property lawyer.
• Offers information about the law, not legal advice.
• Labors under a dearth of legal authorities specific to AI.
• Must treat some subjects in cursory fashion only.
• Presents theories of liability as illustrations, conceding nothing as to their validity.
• Criticizes no person or entity, nor AI.
• In this presentation, neither creates nor seeks to create an attorney-client relationship with any member of the audience.
What Uses Can Mental
Health Professionals
Make of AI?
If you have begun
or are considering
using AI or
ChatGPT in your
work, please
outline those
activities in the
chat box.
We will proceed with the presentation while
you do so, then we will come back later.
What are AI and ChatGPT?
Three Primary Areas:
1. Information Retrieval and Research
2. Personalized Case Analysis,
Diagnosis & Treatment Plans
3. Client & Patient Education
How are AI & ChatGPT being
used to help healthcare practices?
• Programs like Elicit and Claude can provide advanced
research capabilities that exceed traditional methods.
• For example, AI at Elicit can extract information from up to 100
papers and present the information in a structured table.
• It can find scientific papers on a question or topic and organize
the data collected into a table.
• It can also discover concepts across papers to develop a table
of concepts synthesized from the findings.
1. Information Retrieval and
Research
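The structured-extraction workflow described above can be sketched as a loop: run each paper's text through an extraction step, then collect the answers into one table. This is only an illustration of the pattern; the `extract()` function here is a toy keyword matcher standing in for the language model a tool like Elicit would actually call.

```python
# Sketch of the "structured extraction" pattern: every paper is run
# through the same extraction step and the results become one table.
import csv
import io

def extract(abstract: str) -> dict:
    """Toy stand-in for an LLM extraction step (keyword matching only)."""
    return {
        "mentions_rct": "randomized" in abstract.lower(),
        # First bare number in the abstract, used as a crude sample size.
        "sample_size": next((int(w) for w in abstract.split() if w.isdigit()), None),
    }

def papers_to_table(papers: dict[str, str]) -> str:
    """Run the extractor over every paper and emit a CSV table."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["paper", "mentions_rct", "sample_size"])
    for title, abstract in papers.items():
        row = extract(abstract)
        writer.writerow([title, row["mentions_rct"], row["sample_size"]])
    return buf.getvalue()

papers = {
    "Study A": "A randomized trial of CBT in 120 adults with depression.",
    "Study B": "Case series describing outcomes in 9 adolescents.",
}
print(papers_to_table(papers))
```

The same shape scales from two abstracts to a hundred: only the `papers` mapping and the extraction step change.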
• Ethical Considerations: Ethical research
practices must still apply, ensuring the information
retrieved is evidence-based, peer-reviewed, and subject
to privacy regulations such as HIPAA.
• Issues of ChatGPT copyright ownership must be
considered; just because a system can do something
does not mean we should.
1. Information Retrieval and
Research
• Programs like OpenAI, Bard, Monica, and others can analyze
and detect behavioral health issues and potential diagnoses
from "prompts," that is, commands, that include anything from
short behavioral descriptions to vast patient datasets.
• They can query for signs of substance use, self-harm,
depression, suicidality, etc.
• They can also engage in brainstorming sessions to explore
various possible diagnoses, and which facts to collect or areas to
explore to arrive at a definitive diagnosis.
• They can incorporate extensive patient data, including medical
history, psychological assessments, and patient demographics.
2. Personalized Case Analysis,
Diagnosis & Treatment Plans
• They use natural language processing (NLP) to extract
relevant information from clinical notes, interviews, and
questionnaires.
• They can be instructed to incorporate structured data such
as diagnostic codes (ICD-10), medication history, and
desired treatment outcomes.
• These chatbots can be given established clinical
guidelines or consensus documents and asked how one's
treatment plan needs to be adjusted to comply
with the guidelines.
2. Personalized Case Analysis,
Diagnosis & Treatment Plans
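Assembling structured fields into a single chatbot prompt, as described above, can be sketched in a few lines. The field names and guideline below are illustrative, not any standard schema, and the sketch assumes all PHI has already been removed before anything is sent.

```python
# Hedged sketch: combining structured clinical fields (ICD-10 codes,
# medication history, desired outcome, a guideline to check against)
# into one de-identified prompt for a chatbot.

def build_prompt(icd10: list[str], medications: list[str],
                 goal: str, guideline: str) -> str:
    return (
        "You are reviewing a DE-IDENTIFIED case summary.\n"
        f"Diagnoses (ICD-10): {', '.join(icd10)}\n"
        f"Medication history: {', '.join(medications)}\n"
        f"Desired outcome: {goal}\n"
        f"Guideline to check against: {guideline}\n"
        "Question: how should the treatment plan be adjusted to comply "
        "with the guideline?"
    )

prompt = build_prompt(
    icd10=["F33.1"],                  # major depressive disorder, recurrent, moderate
    medications=["sertraline 50 mg"],
    goal="remission of depressive symptoms",
    guideline="APA depression practice guideline",
)
print(prompt)
```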
• Ethical Considerations: All protected health
information (PHI) must be meticulously removed before
uploading any prompts.
• Plus, full transparency must be given to clients and patients
regarding AI's role in their diagnosis.
• Attention to the strong biases inherent to AI is needed to
ensure that AI doesn't perpetuate existing
inequalities.
• HIPAA privacy and copyright laws must also be honored.
These requirements take time and attention.
• Practitioners are strongly advised only to undertake these
activities after due training.
2. Personalized Case Analysis,
Diagnosis & Treatment Plans
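The "remove PHI before uploading" habit can be illustrated with a minimal redaction pass. This is only a sketch: HIPAA's Safe Harbor method covers 18 identifier types, and a few regular expressions cannot catch names or free-text identifiers, so real de-identification requires dedicated tooling and human review.

```python
# Minimal illustration of scrubbing obvious identifiers from text
# before it is pasted into any AI prompt. NOT sufficient for HIPAA
# compliance on its own.
import re

PATTERNS = {
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Seen 3/14/23; callback 619-555-0101; reached at pt@example.com."
print(redact(note))
```

Names, addresses, and record numbers still slip past patterns like these; the point is only the order of operations: redact first, upload second.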
• These chatbots can develop tailored treatment plans to meet
individual patient needs after considering diagnoses, client or
patient preferences, comorbidities, and responses to previous
treatments.
• Ethical Considerations: Legal and ethical standards for
patient privacy, autonomy, and informed consent
must be upheld.
• Free ChatGPT systems often publicly announce in their Terms
and Conditions files that they own all information entered into
their systems.
3. Personalized Treatment Plans
https://telehealth.org/ai-and-mental-health-is-it-a-game-changer-for-your-practice/
• Depression: clients' voices.
• OUD: Narx scores and overdose risk rating.
• Digital therapeutics: CBT for OUD (Pear), now bankrupt.
• Akili Interactive Labs: interactive digital games (like
videogames) for ADHD, major depression, ASD, MS.
Other Uses of ChatGPT by
Professionals
Is Facebook’s Suicide Prevention Service
“Research”?
Facebook
Innovation
• Technique is
innovative, novel.
• Facebook taught its
algorithm which text to
ignore.
• Proprietary:
Details not
available.
• Informed Consent?
(see below)
Facebook
Accuracy
• Traditional View: Prediction
requires analysis of hundreds of
factors: race, sex, age, SES,
medical history, etc.
• Record of results? Publication?
• Efficacy across races, sexes,
nationalities?
• False Positive: Unwanted psych
care?
• Users: Wariness enhanced?
• Barnett and Torous, Ann. Int. Med.
(2/12/19)
What is AI’s Clinical Reliability?
How AI has helped:
1. Personal Sensing (“Digital Phenotyping”)
Collecting and analyzing data from sensors
(smartphones, wearables, etc.) to identify behaviors,
thoughts, feelings, and traits.
2. Natural language processing
3. Chatbots
D’Alfonso, Curr Opin Psychol. 2020;36:112–117.
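Personal sensing can be made concrete with one toy behavioral feature: daily distance traveled, computed from raw smartphone GPS fixes. The haversine formula is standard, but the feature choice and sample data here are illustrative, not any particular product's method.

```python
# Hedged sketch of "digital phenotyping": turning raw GPS samples
# into a single behavioral feature (distance moved in a day).
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def daily_distance_km(samples):
    """Sum distances between consecutive (lat, lon) GPS fixes."""
    return sum(haversine_km(*a, *b) for a, b in zip(samples, samples[1:]))

# Invented fixes for one day (San Diego coordinates, for illustration).
home = (32.7157, -117.1611)
day = [home, (32.7157, -117.1611), (32.7750, -117.0710)]
print(f"{daily_distance_km(day):.1f} km moved")
```

A near-zero value over many days is the kind of signal such systems correlate with behaviors like withdrawal; interpretation is still a clinical judgment.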
1. Machine Learning
• Predict and classify suicidal thoughts, depression,
schizophrenia with "high accuracy".
U. Cal and IBM,
https://www.forbes.com/sites/bernardmarr/2023/07/06/ai-in-mental-health-opportunities-and-challenges-in-developing-intelligent-digital-therapies/
2. Causation v. Correlation
• Better prognosis for pneumonia in asthma patients.
6. Hallucinations
• NEDA’s Tessa: Harmful diet advice to patients with
eating disorders.
7. Generalizability
• When training data do not resemble actual data.
• Watson and chemo.
8. No compassion or empathy
9. No conceptual thinking
10. No common sense
Does AI Threaten Privacy?
Big Data
• Amazon’s Alexa and the NHS: sharing
of patient data?
• Duration of retention of information?
© 1994-2022 Telebehavioral Health Institute, LLC. All rights reserved.
• Facebook, again: No opt-in or opt-out.
• Targeted ads?
• HIPAA: N/A. No covered entity, no business
associate.
 Is de-identification obsolete?
• COPPA: N/A. The child who committed suicide was less
than 13 years old.
Privacy laws expanding, yet not clear that existing
laws suffice.
Consider California:
1. HIPAA as amended by HITECH
2. Cal. Confidentiality of Medical Information
Act
3. Cal. Online Privacy Protection Act
4. Cal. Consumer Privacy Act
5. California’s Bot Disclosure Law
6. GDPR
• Yet still not certain the law covers info on
apps.
Facial recognition: both privacy and
discrimination laws.
Has AI Generated Any Privacy Litigation?
PM v OpenAI (N.D. Cal. 2023)
• Purported class action alleges OpenAI
violated users’ privacy rights based on data
scraping of social media comments, chat
logs, cookies, contact info, log-in credentials
and financial info.
Do We Need to License AI to Use it in
Healthcare?
Do We Need to License AI to
Use it in Healthcare?
• Practice of clinical psychology includes but is not limited to: ‘Diagnosis and
treatment of mental and emotional disorders’ which consists of the appropriate
diagnosis of mental disorders according to standards of the profession and the
ordering or providing of treatments according to need.
• Va. Code § 54.1-3600
• Other professions have similar statutes across the 50 states & territories.
Do We Need to License AI to
Use it in Healthcare?
• Definitions of medicine, psychology, nursing, etc.:
• Likely broad enough to encompass AI functions.
• An AI system is not human, but if it functions as a HC professional, some propose
licensure or some other regulatory mechanism.
Do We Need to License AI to
Use it in Healthcare?
If licensure is needed:
• In what jurisdiction(s)?
• Consider scope of practice.
What Does FDA Say About AI in
Healthcare?
What Does FDA Say About AI in Healthcare?
• Regulatory framework is not yet fully developed.
• Historical: Drug or device maker wishing to modify product submits proposal
and supporting data; FDA says yes or no.
• FDA recognizes potential for drug development and the impediments that fusty
regulation could erect.
What Does the Food and Drug Administration (FDA) Say
About AI in Healthcare?
• Concerned with transparency (can it be explained? intellectual property) and
security and integrity of data generated; potential for amplifying errors or
biases.
• FDA urges creation of a risk management plan, and care in choice of training
data, testing, validation.
• Pre-determined change control plans.
FDA Approvals of
AI/ML Devices
What Types of Clinical Decision Software
(“CDS”) Will FDA Regulate Most Closely?
FDA Concerns
1. CDS to “inform clinical management for serious or critical situations or
conditions” especially where the health care provider cannot independently
evaluate basis for recommendation.
2. CDS functions intended for patients to inform clinical management of non-
serious conditions or situations, and not intended to help patients evaluate
basis for recommendations.
3. Software that uses patient’s images to create treatment plans for health care
provider review for patients undergoing RT with external beam or
brachytherapy.
May I Use AI in Hiring?
Yes.
• Resume evaluations.
• Scheduling interviews.
• Sourcing data.
What Have the States to Say About AI in
Employment Decisions?
• Most States: Silent so far.
• Ill., Md., and NYC: Employers need candidate’s
consent to use AI in hiring.
• NYC: Must prove to a third-party audit company that
Employer’s process was free of sexual or racial biases.
Can AI Be Liable in Tort?
• Not human, and not a legal person.
 Cannot be directly liable for its own
negligence or serve as an agent for vicarious
liability.
• Many different SW and HW developers take part.
• Control hard to determine, given:
• Discreteness: Parts made at different times in
different places without coordination.
• Diffuseness: Developers may not act in conjunction.
Yet: Consider corporations and ships (an “in rem” action
in admiralty law)
Does AI Owe a Duty to Clients?
• Duty is a question of law, for the court.
• In health care, duty arises from professional relationship.
 Can AI have such a relationship?
 Consulting physician who does not interact with the
patient owes no duty to that patient.
See Irvin v. Smith, 31 P.3d 934, 941 (Kan. 2001);
St. John v. Pope, 901 S.W.2d 420, 424 (Tex. 1995)
• Does AI resemble a consultant?
• Or an MRI, e.g.?
 Epic sepsis model missed 2/3 of cases. JAMA IM
6/21
• Beware Automation Bias
https://telehealth.org/chatgpt-ai-bias/
Can Plaintiffs Impose a Standard of Care
on AI?
• HCP: Reasonableness
 Can AI ever be unreasonable?
 Is the HCP relying on AI immune from
liability?
 Higher SOC for HCP using AI?
 Will AI endanger state standards of
care?
• Will res ipsa play a role?
 Probably not if the harm is
unexplainable, untraceable, and rare.
• Nor can P establish exclusive control
 But what about the auto pilot cases?
Are AI Errors Foreseeable?
• Foreseeability: A precondition of a finding of negligence.
 Law expects actor to take reasonable steps to reduce the risk of
foreseeable harms.
• Software developer cannot predict how unsupervised AI will solve
the tasks and problems it encounters.
 Machine teaches itself how to solve problems in unpredictable
ways.
 No one knows exactly what factors go into AI system’s decisions
• The unforeseeability of AI decisions is itself foreseeable.
Are AI Errors Foreseeable?
• Computational models to generate recommendations are opaque.
 Algorithms may be non-transparent because they rely on rules we
humans cannot understand.
 No one, not even programmers, knows what factors go into ML.
• AI's solution may not have been foreseeable to a human, even
to the human who designed the AI.
 Does that defeat a claim of duty?
Are AI Errors Foreseeable?
• In a black-box AI system, the result of an AI’s decision may not have
been foreseeable to its creator or user.
 So, will an AI system be immune from liability?
 Will its creator?
Are AI Errors Foreseeable?
What if AI Recommends
Non-standard Treatment?
What if AI Recommends Non-standard
Treatment?
• The progress problem: Arterial blood gas monitoring in premature newborns
circa 1990.
• Non-standard advice: Proceed with caution.
 The tension between progress and tort law.
Can I be Liable for My AI’s Mistake?
• Can AI be my agent?
 No ability to negotiate the scope of authorization.
 Cannot dissolve agent-principal relationship.
 Cannot renegotiate its terms.
 An agent can refuse agency; a principal can refuse to
be the master.
• Agency law does contemplate that the agent will use her
discretion in carrying out the principal’s tasks.
• Who controls the AI, if anyone?
 AI autonomy is increasing.
• If machine is autonomous, could it not embark on
a frolic and detour beyond the scope of its
employment?
If AI Can be an Agent, What or Who is its
Principal?
• Note the decline of the “Captain of the Ship” doctrine.
• Possibilities:
• Component designer?
• Medical device company?
• The owner of the AI’s algorithm?
• Whoever maintains the product?
• Health care professionals?
Possibilities (cont’d):
• Hospitals and health care systems?
• Pharmaceutical companies?
• Professional schools?
• Insurers?
• Regulators?
Could I be Liable for Promoting AI?
• Hospitals: Large investments in robotic
systems, e.g.
 Procedures more expensive.
 By shifting resident teaching time
from standard laparoscopy to robotic
surgery, we may produce “high-cost”
surgeons whom insurers will penalize.
• Damage to the professional
relationship?
 The rapport problem.
Does the Law Require the Patient’s
Informed Consent to Use of AI in Health
Care?
Does the Law Require the Patient’s
Informed Consent to Use of AI in Health
Care?
• Traditional:
 “Every human being of adult years and sound mind has a right to
determine what shall be done with his own body”
Schloendorff v. NY Hospital, 105 N.E. 92 (N.Y. 1914) (Cardozo, J.)
• AI: What disclosures are required?
(cont’d)
• Explain how AI works?
 What does ‘informed’ mean where no one knows how
black-box AI works?
• Whether the AI system was trained on a data set
representative of a particular patient population?
• Comparative predictive accuracy and error rates of AI system
across patient subgroups?
• Roles human caregivers and the AI system will play during
each part of a procedure?
(cont’d)
• Whether a medtech or pharma company influenced an
algorithm?
• Compare results with AI and human approaches?
 What if there are no data?
• What if the patient doesn’t want to know?
• Provider’s financial interest in the AI used?
• Disclose AI recommendations HCP disapproves, or COIs?
(cont’d)
• Pedicle screw litigation: Used off-label
 At present, nearly all AI is used off-label.
• Investigative nature of the device's use?
 Rights of subjects in clinical trials?
• Experimental procedures: “most frequent risks and hazards” will
remain unknown until the procedure becomes established.
Will Plaintiffs be Able to Prevail on Product
Liability Claims?
• A creature of state law.
 Theories of liability sound in negligence, strict liability, or breach of
warranty.
• Responsibility of a manufacturer, distributor, or seller of a defective
product.
 Is AI a “product” or a service?
 The law has traditionally held that only personal property in
tangible form can be considered “products.”
The law has traditionally considered software to be a service.
Will Plaintiffs be Able to Prevail on
Product Liability Claims?
• Claimant must prove the item that caused the injury was defective at
the time it left the seller’s hands.
 By definition, ML changes the product over time.
• Suppose an AI system is used to detect abnormalities on MRIs
automatically and is advertised as a way to improve productivity in
analyzing images,
 No problem interpreting high-resolution images but
 Fails with images of lesser quality.
Likely: A products liability claim for both negligence and failure
to warn.
Will Plaintiffs be Able to Prevail on
Product Liability Claims?
• No matter how good the algorithm is, or how much better it is than a
human, it will occasionally be wrong.
 Exception to strict liability for unavoidably unsafe products.
(Restatement)
• Imposing strict liability: Would likely slow down or cease production
of this technology.
Will Plaintiffs be Able to Prevail on
Product Liability Claims?
Is There a Duty to Warn?
Duty to warn: Traditional
• Products:
1. Manufacturer knew or should have known that the product
poses substantial risk to the user.
2. Danger would not be obvious to users.
3. Risk of harm justifies the cost of providing a warning.
• Mental Health:
 Tarasoff v. The Regents of the University of California (1976)
• Learned Intermediary (LI) Rule:
1. Likelihood harm will occur if intermediary does not
pass on the warning to the ultimate user.
2. Magnitude of the probable harm.
3. Probability that the particular intermediary will not
pass on the warning.
4. Ease or burden of the giving of the warning by the
manufacturer to the ultimate user.
Will Plaintiffs be Able to Prove Causation?
• Causation will often be tough in AI tort cases.
• Demonstrating the cause of an injury: Already hard in health
care.
 Outcomes frequently probabilistic rather than deterministic.
• AI models: Often nonintuitive, even inscrutable.
 Causation even more challenging to demonstrate.
• No design or manufacturing flaw if robot involved in
an accident was properly designed, but based on the
structure of the computing architecture, or the
learning taking place in deep neural networks, an
unexpected error or reasoning flaw could have
occurred.
 Mracek v. Bryn Mawr Hospital, 610 F. Supp. 2d
401 (E.D. Pa. 2009), aff'd, 363 F. App'x 925, 927
(3d Cir. 2010)
Who is an Expert?
Who is an Expert?
• Trial Court: Cardiologist not qualified to testify on weight loss drug combo that
proprietary software package recommended because doctor is not a
software expert.
 Skounakis v. Sotillo A-2403-15T2 (N.J. Super. Ct. App. Div. Mar. 19, 2018)
(on appeal, reversed)
Who is an Expert?
• MD who had performed many robotic surgeries not qualified on causation for
want of programming expertise.
 Mracek v. Bryn Mawr Hospital, 363 F. App'x. 925, 926 (3d Cir. 2010) (ED
complicating robotic prostatectomy)
Marketing: Should We Expect Breach of
Warranty Claims?
• A warranty may arise by an affirmation of fact or a promise made by
seller relating to the product. See U.C.C. § 2-313.
 Need not use special phrases or formal terms (“guarantee”;
“warranty”)
• Promotion of an AI system as a superior product may create a
cause of action for breach of warranty.
 Darringer v. Intuitive Surgical, Inc., No. 5:15-cv-00300-RMW,
2015 U.S. Dist. LEXIS 101230, at *1, *3 (N.D. Cal. Aug. 3, 2015).
(another DaVinci robot case)
Marketing: Should We Expect
Breach of Warranty Claims?
Is AI a Person?
Is AI a Person?
Of course not.
• Artificial agents lack self-consciousness,
human-like intentions, ability to suffer,
rationality, autonomy, understanding, and
social relations deemed necessary for
moral personhood.
But:
Is AI a Person?
But:
• Could serve useful cost-spreading and
accountability functions.
• EU Parliament, 2017: Recognizing
autonomous robots as “having the status
of electronic persons responsible for
making good any damage they may
cause”.
 Compulsory insurance scheme
• Opponents
• Harm caused by even fully autonomous
technologies is generally reducible to
risks attributable to natural persons or
existing categories of legal persons.
• Even limited AI personhood (corps, e.g.)
will require robust safeguards such as
having funds or assets assigned to the
AI person.
Will Plaintiffs be Able to Impose
Common Enterprise Liability with
AI?
Example: Hall v. Du Pont, 345 F.Supp. 353
(E.D.N.Y. 1972)
• 1955-’59: Blasting caps injured 13 kids, 12 incidents, 10 states.
• Claim: Failure to warn.
• Ds: 6 cap mfrs + TA.
• Evidence: Acting independently, Ds adhered to industry-wide safety
standard; delegated labeling to TA; industry-wide cooperation in the
manufacture and design of blasting caps.
• Held: If Ps could show ≥ 1 D mfr made the caps, burden of
proof on causation would shift to Ds.
Example: Hall v. Du Pont, 345 F.Supp. 353 (E.D.N.Y. 1972)
• Theory: Clinicians, manufacturers of clinical AI systems, and
hospitals that employ the systems are engaged in a common
enterprise for tort liability purposes.
 As members of common enterprise, could be held jointly liable.
 Used where Ds strategically formed and used corporate entities to
violate consumer protection law. E.g., Fed. Trade Comm'n v.
Pointbreak Media, LLC, 376 F. Supp. 3d 1257, 1287 (S.D. Fla. 2019)
(corporations were considered to be functioning jointly as a common
enterprise)
(cont’d)
How Can We Defend Ourselves
Against Claims?
• Compliance with FDA regulations: Preemption.
• Policy: No product liability claim encompasses the unpredictable,
autonomous machine-mimicking-human behavior underlying AI’s
medical decision-making.
 Unpredictability of autonomous AI is not a bug, but a feature.
How Can We Defend Ourselves
Against Claims?
• Software is not a Product.
 Rodgers v. Christie, 795 F. App'x 878, 878-79 (3rd Cir. 2020):
Public Safety Assessment (PSA), an algorithm that was part of the
state's pretrial release program, was not a product, so product liability
for the murder of a man by a killer on pre-trial release did not lie.
1. Not disseminated commercially.
2. Algorithm was neither “tangible personal property” nor
tenuously “analogous to” it.
How Can We Defend Ourselves
Against Claims?
• Breach of warranty: Privity
 Typically the clinician, and not the patient, purchased system.
• Product misuse, modification: Progress notes, e.g.
 Seller does not know specifics of these additional records or how
algorithm developed following provider’s use.
• LI doctrine
How Can We Defend Ourselves
Against Claims?
Will AI Put Me Out of Work?
Will AI Put Me Out of Work?
• ChatGPT can outperform 1st and 2nd year medical students in
answering challenging clinical care exam questions.
• Law students: Similar.
• But: Probably not.
(cont’d)
• John Halamka: “Generative AI is not thought, it's not
sentience.”
• Most, if not all, countries are experiencing severe clinician
shortages.
 Shortages are only predicted to get worse in the U.S. until at
least 2030.
(cont’d)
• AI-infused precision health tools might well be essential to
improving the efficiency of care.
• AI might help burn-out: ease the day-to-day weariness,
lethargy, and delay of reviewing patient charts.
• The day may come when the SOC requires use of AI.
Can we Get Paid for Using AI?
• Consider a pathology over-read for an in-patient:
• Whether hospital is in- or out-of-network for patient's insurance
• Whether patient's insurer deems AI to be “medically necessary”
• If in-network, what is the negotiated fee for this specific intervention
between this hospital and this patient's insurer
• Whether deal pays for hospitalization per diem or on Diagnosis Related
Group (DRG) basis
• AI might add nothing to charge
• What percentage of co-insurance the patient must pay
• How much of the deductible the patient will have met by end of this
episode of care.
Can We Get Paid for Using AI?
Consider an outpatient setting:
• Whether the outpatient facility is in or out-of-network for the
patient's insurer.
• Whether the facility is owned by a hospital.
 If hospital-owned, may add a “facilities fee”.
• Whether this patient's insurer deems the AI to be “medically
necessary”.
• Negotiated fee schedule between facility and the patient's insurer.
• How much of the deductible the patient will have met by the
conclusion of this episode of care.
Can We Get Paid for Using AI?
• Provided for "medically necessary" care.
• Not: experimental treatments or devices
• Slow governmental adoption: The telehealth model.
Can We Get Paid for Using AI?
• Sept. 2020: CMS approved the first add-on payment, up to $1,040
on top of inpatient hospitalization costs, for use of Viz.ai
software to help detect strokes.
• Whether a 43-patient study used to support the company’s claim of
clinical benefit was large enough to warrant the added
reimbursement?
Can We Get Paid for Using AI?
Can AI Detect or Prevent Fraud?
Can AI Detect or Prevent Fraud?
• One large health insurer reported a savings of $1 billion
annually through AI-prevented FWA.
• Fed. Ct. App.: Company’s use of AI for prior authorization and utilization
management services to MA and Medicaid managed care
plans is subject to qualitative review that may result in liability
for the AI-using entity.
 US ex rel. v. eviCore Healthcare MSI, LLC (2d Cir. 2022)
Can Providers Use AI to Cheat?
Does AI Infringe Copyright?
J. DOE 1 et al. v. GitHub, Inc. et al., Case
No. 4:22-cv-06823-JST (N.D. Cal. 2022):
• Ps: They and class own copyrighted materials made available
publicly on GitHub.
• Ps: Representing class, assert 12 causes of action, including
violations of Digital Millennium Copyright Act, California
Consumer Privacy Act, and breach of contract.
Claim:
• Defendants' OpenAI's Codex and GitHub's Copilot generate
suggestions nearly identical to code scraped from public
GitHub repositories, without giving the attribution required
under the applicable license.
Defenses:
1. Standing. Did these Plaintiffs suffer injury?
2. Intent: Copilot, as a neutral technology, cannot satisfy
DMCA’s § 1202's intent and knowledge requirements.
https://telehealth.org/ai-copyright-chatgpt-copyright/
What Other Issues Should We Consider?
• Ownership of data
• Antitrust
 Algorithmic pricing can be highly
competitive.
 But competitors could use the same
software to collude.
Does AI Engage in Invidious
Discrimination?
Training data key:
• A facial recognition AI software was unable to accurately identify
more than one-third of Black faces in a photo lineup.
 The algorithm was trained on a majority male and white dataset.
Does AI Engage in Invidious
Discrimination?
Optum:
• Algorithm to identify high-risk patients to inform fund
allocation. Used health care costs to make predictions.
 Only 17.7% of black patients were identified as high-risk; true
number should have been ~ 46.5%.
 Spending for black patients lower than for white patients owing
to “unequal access to care”.
Does AI Engage in Invidious
Discrimination?
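The Optum finding can be reproduced in miniature: rank patients by historical cost as a proxy for need, and a group with equal need but lower spending (from unequal access to care) drops out of the "high-risk" list. All numbers below are invented for illustration only.

```python
# Toy demonstration of proxy-label bias: cost stands in for need,
# so the lower-spending group is under-identified as high-risk.

# (need_score, historical_cost, group) -- both groups have identical need,
# but group B spends 40% less for the same need (unequal access).
patients = [
    (9, 9000, "A"), (8, 8000, "A"), (7, 7000, "A"), (3, 3000, "A"),
    (9, 5400, "B"), (8, 4800, "B"), (7, 4200, "B"), (3, 1800, "B"),
]

def top_by(key_index: int, k: int):
    """Return the k patients ranked highest on the given field."""
    ranked = sorted(patients, key=lambda p: p[key_index], reverse=True)
    return ranked[:k]

by_cost = top_by(1, 3)  # what a cost-proxy algorithm selects
by_need = top_by(0, 3)  # what ranking on true need would select

print("cost proxy picks groups:", [p[2] for p in by_cost])
print("true need picks groups:", sorted({p[2] for p in by_need}))
```

Ranking on cost selects only group A, while ranking on true need selects members of both groups, which is the structure of the Optum result in the slide above.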
References
• Julia Angwin et al., "Machine Bias," ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
• Emily Berman, "A Government of Laws and Not of Machines," 98 B.U.L. Rev. 1278, 1315, 1316 (2018)
• Karni Chagal-Feferkorn, "The Reasonable Algorithm," U. Ill. J.L. Tech. & Pol'y (forthcoming 2018)
• Duke Margolis Center for Health Policy, "Current State and Near-Term Priorities for AI-Enabled Diagnostic Support Software in Health Care" (2019)
• Cade Metz and Craig S. Smith, "Warnings of a Dark Side to A.I. in Health Care," N.Y. Times (Mar. 21, 2019)
• Daniel Schiff and Jason Borenstein, "How Should Clinicians Communicate With Patients About the Roles of Artificially Intelligent Team Members?" 21(2) AMA Journal of Ethics E138-145 (Feb. 2019)
• Nicolas P. Terry, "Appification, AI, and Healthcare's New Iron Triangle" [Automation, Value, and Empathy], 20 J. Health Care L. & Pol'y 118 (2018)
• Wendell Wallach, A Dangerous Master 239-43 (2015)
• Andrew Tutt, "An FDA for Algorithms," 69 Admin. L. Rev. 83, 104 (2018)
Final questions?
Telehealth.org
contact@telehealth.org
619-255-2788
Keep in touch! 
  • 8. • Must treat some subjects in cursory fashion only. • Presents theories of liability as illustrations, conceding nothing as to their validity. • Criticizes no person or entity, nor AI. • In this presentation, neither creates nor seeks to create an attorney-client relationship with any member of the audience. Speaker Disclaimers
  • 10. © 1994-2023 Telehealth.org, LLC All rights reserved. If you have begun or are considering using AI or ChatGPT in your work, please outline those activities in the chat box. 10
  • 11. We will proceed with the presentation while you do so, then we will come back later.
  • 12. What are AI and ChatGPT? ?
  • 13. © 1994-2023 Telehealth.org, LLC All rights reserved. Three Primary Areas: 1. Information Retrieval and Research 2. Personalized Case Analysis, Diagnosis & Treatment Plans 3. Client & Patient Education How are AI & ChatGPT being used to help healthcare practices? 13
  • 14. © 1994-2023 Telehealth.org, LLC All rights reserved. • Programs like Elicit and Claude can provide advanced research capabilities that exceed traditional methods. • For example, AI at Elicit can extract information from up to 100 papers and present the information in a structured table. • It can find scientific papers on a question or topic and organize the data collected into a table. • It can also discover concepts across papers to develop a table of concepts synthesized from the findings. 1. Information Retrieval and Research 14
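The "structured table of findings" workflow on this slide can be sketched without any particular service: once fields have been extracted from each paper (by whatever tool), tabulating them is straightforward. The studies, sample sizes, and outcomes below are fabricated placeholders, not real papers.

```python
# Sketch of the findings-table workflow: given fields already extracted from
# each paper, render them as an aligned text table. The papers and results
# below are fabricated placeholders.
papers = [
    {"study": "Study A (2021)", "n": 120, "intervention": "CBT chatbot", "outcome": "PHQ-9 -3.1"},
    {"study": "Study B (2022)", "n": 85,  "intervention": "Guided app",  "outcome": "PHQ-9 -2.4"},
]

columns = ["study", "n", "intervention", "outcome"]
# Column width = widest value in that column (or the header itself)
widths = {c: max(len(c), *(len(str(p[c])) for p in papers)) for c in columns}

header = " | ".join(c.ljust(widths[c]) for c in columns)
rows = [" | ".join(str(p[c]).ljust(widths[c]) for c in columns) for p in papers]
table = "\n".join([header, "-" * len(header), *rows])
print(table)
```

The synthesis step the slide describes — a tool deciding *which* fields to extract — is the hard part; the tabulation itself is trivial once those fields exist.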
  • 15. © 1994-2023 Telehealth.org, LLC All rights reserved. • Ethical Considerations: Ethical research practices must still apply, ensuring the retrieved information is evidence-based, peer-reviewed, and handled subject to privacy regulations such as HIPAA. • Issues of ChatGPT copyright ownership must also be considered: just because a system can do something does not mean we should. 1. Information Retrieval and Research 15
  • 16. © 1994-2023 Telehealth.org, LLC All rights reserved. • Programs like OpenAI, Bard, Monica, and others can analyze and detect behavioral health issues and potential diagnoses from "prompts," that is, commands that include anything from short behavioral descriptions to vast patient datasets. • They can query for signs of substance use, self-harm, depression, suicidality, etc. • They can also engage in brainstorming sessions to explore various possible diagnoses, and which facts to collect or areas to explore to arrive at a definitive diagnosis. • They can incorporate extensive patient data, including medical history, psychological assessments, and patient demographics. 2. Personalized Case Analysis, Diagnosis & Treatment Plans 16
  • 17. © 1994-2023 Telehealth.org, LLC All rights reserved. • They use natural language processing (NLP) to extract relevant information from clinical notes, interviews, and questionnaires. • They can be instructed to incorporate structured data such as diagnostic codes (ICD-10), medication history, and desired treatment outcomes. • These chatbots can be given established clinical guidelines or consensus documents and asked how one's treatment plan needs to be adjusted to comply with the guidelines. • They can also engage in brainstorming sessions to explore various possible diagnoses, and which facts to collect or areas to explore to arrive at a definitive diagnosis. 2. Personalized Case Analysis, Diagnosis & Treatment Plans 17
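The "structured data from clinical notes" step in this slide can be sketched with a simple pattern match for ICD-10-style codes. The regex is deliberately simplified (real pipelines validate against the official code set) and the note text is fabricated.

```python
import re

# Minimal sketch: pull ICD-10-style codes (e.g., F32.1) out of a free-text
# clinical note. The pattern is a simplification -- one letter (not U), two
# digits, optional decimal extension -- and the note below is fabricated.
ICD10 = re.compile(r"\b[A-TV-Z]\d{2}(?:\.\d{1,4})?\b")

note = (
    "Assessment: symptoms consistent with F32.1 (major depressive disorder, "
    "moderate); rule out F41.1. Continue current medication."
)

codes = ICD10.findall(note)
print(codes)  # ['F32.1', 'F41.1']
```

A real system would also map each matched code to its description and check that the code actually exists in the current ICD-10-CM release, since the pattern alone will match plausible-looking but invalid codes.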
  • 18. © 1994-2023 Telehealth.org, LLC All rights reserved. • Ethical Considerations: All protected health information (PHI) must be meticulously removed before uploading any prompts. • Plus, full transparency must be given to clients and patients regarding AI's role in their diagnosis. • Attention must be paid to the strong biases inherent to AI to ensure that AI doesn't perpetuate existing inequalities. • HIPAA privacy and copyright laws must also be followed. These requirements take time and attention. • Practitioners are strongly advised to undertake such activities only after due training. 2. Personalized Case Analysis, Diagnosis & Treatment Plans 18
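The "remove PHI before uploading any prompts" point can be sketched as a deliberately simplistic scrub of obvious identifiers. This is an illustration only: HIPAA's Safe Harbor method covers 18 identifier categories (names, geography, dates, and more), far beyond the four regex patterns here, and the prompt text is fabricated.

```python
import re

# Deliberately simplistic sketch of scrubbing a few obvious identifiers from
# a prompt before it leaves the office. Real de-identification requires far
# more than these patterns; the example prompt is fabricated.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
]

def scrub(text: str) -> str:
    """Replace each matched identifier with a placeholder token."""
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

prompt = "Pt seen 3/14/2023, SSN 123-45-6789, cell 619-555-0100, jdoe@example.com."
print(scrub(prompt))  # Pt seen [DATE], SSN [SSN], cell [PHONE], [EMAIL].
```

Even with a scrubber like this in place, names, free-text locations, and rare-condition details can still re-identify a patient, which is why the slide's "due training" caveat matters.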
  • 19. © 1994-2023 Telehealth.org, LLC All rights reserved. • These chatbots can develop tailored treatment plans to meet individual patient needs after considering diagnoses, client or patient preferences, comorbidities, and responses to previous treatments. • Ethical Considerations: Legal and ethical standards for patient privacy, autonomy, and informed consent must be upheld. • Free ChatGPT systems often publicly announce in their Terms and Conditions files that they own all information entered into their systems. 3. Personalized Treatment Plans 19
  • 21. © 1994-2023 Telehealth.org, LLC All rights reserved. • Depression-clients’ voices. • OUD-Narx scores and overdose risk rating. • Digital Therapeutics: CBT for OUD (Pear)  Bankrupt • Akili Interactive Labs: Interactive digital games (like videogames).  ADHD, Major depression, ASD, MS. Other Uses of ChatGPT by Professionals
  • 22. Is Facebook’s Suicide Prevention Service “Research”?
  • 23. © 1994-2023 Telehealth.org, LLC All rights reserved. Facebook Innovation • Technique is innovative, novel. • Facebook taught its algorithm which text to ignore. • Proprietary: Details not available. • Informed Consent? (see below)
  • 24. © 1994-2023 Telehealth.org, LLC All rights reserved. Facebook Accuracy • Traditional View: Prediction requires analysis of hundreds of factors: race, sex, age, SES, medical history, etc. • Record of results? Publication? • Efficacy across races, sexes, nationalities? • False Positive: Unwanted psych care? • Users: Wariness enhanced? • Barnett and Torous, Ann. Int. Med. (2/12/19)
  • 25. What is AI’s Clinical Reliability?
  • 26. How AI has helped: 1. Personal Sensing (“Digital Phenotyping”) Collecting and analyzing data from sensors (smartphones, wearables, etc.) to identify behaviors, thoughts, feelings, and traits. 2. Natural language processing 3. Chatbots D’Alfonso, Curr Opin Psychol. 2020;36:112–117.
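The "personal sensing" idea on slide 26 can be sketched as turning raw phone/wearable readings into daily behavioral features. The sensor readings and the low-mobility/heavy-phone-use threshold below are fabricated illustrations, not a validated clinical signal.

```python
# Toy sketch of personal sensing ("digital phenotyping"): aggregate raw
# sensor events into daily features a clinician or model might review.
# All readings and the flagging rule are fabricated illustrations.
from collections import defaultdict

# (day, sensor, value) -- e.g., step counts and screen-unlock counts
events = [
    ("Mon", "steps", 4200), ("Mon", "unlocks", 55),
    ("Tue", "steps", 800),  ("Tue", "unlocks", 140),
    ("Wed", "steps", 650),  ("Wed", "unlocks", 160),
]

daily = defaultdict(dict)
for day, sensor, value in events:
    daily[day][sensor] = value

# Crude derived feature: days combining low mobility with heavy phone use,
# a pattern sometimes discussed as a possible withdrawal signal.
flagged = [d for d, f in daily.items() if f["steps"] < 1000 and f["unlocks"] > 100]
print(flagged)  # ['Tue', 'Wed']
```

Real digital-phenotyping pipelines add consent management, on-device privacy protections, and clinically validated features; this only shows the aggregation step.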
  • 27. 1. Machine Learning • Predict and classify suicidal thoughts, depression, schizophrenia with "high accuracy". U. Cal and IBM, https://www.forbes.com/sites/bernardmarr/2023/07/06/ai-in-mental-health-opportunities-and-challenges-in-developing-intelligent-digital-therapies/ 2. Causation v. Correlation • Better prognosis for pneumonia in asthma patients.
  • 28. 6. Hallucinations • NEDA’s Tessa: Harmful diet advice to patients with eating disorders. 7. Generalizability • When training data do not resemble actual data. • Watson and chemo. 8. No compassion or empathy 9. No conceptual thinking 10. No common sense
  • 29. Does AI Threaten Privacy?
  • 30. © 1994-2023 Telehealth.org, LLC All rights reserved. Big Data • Amazon’s Alexa and the NHS: No ? sharing of patient data. • Duration of retention of information?
  • 31. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Facebook, again: No opt-in or opt-out. • Targeted ads? • HIPAA: N/A. No covered entity, no business associate.  Is de-identification obsolete? • COPPA: N/A: Child committing suicide was less than 13 years old.
  • 32. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. Privacy laws expanding, yet not clear that existing laws suffice. Consider California: 1. HIPAA as amended by HITECH 2. Cal. Confidentiality of Medical Information Act 3. Cal. Online Privacy Protection Act 4. Cal. Consumer Privacy Act 5. California’s Bot Disclosure Law 6. GDPR • Yet still not certain the law covers info on apps. Facial recognition: both privacy and discrimination laws. 32
  • 33. Has AI Generated Any Privacy Litigation?
  • 34. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. PM v OpenAI (N.D. Cal. 2023) • Purported class action alleges OpenAI violated users’ privacy rights based on data scraping of social media comments, chat logs, cookies, contact info, log-in credentials and financial info.
  • 35. Do We Need to License AI to Use it in Healthcare?
  • 36. Do We Need to License AI to Use it in Healthcare? • Practice of clinical psychology includes but is not limited to: ‘Diagnosis and treatment of mental and emotional disorders’ which consists of the appropriate diagnosis of mental disorders according to standards of the profession and the ordering or providing of treatments according to need. • Va. Code § 54.1-3600 • Other professions have similar statutes across the 50 states & territories.
  • 37. Do We Need to License AI to Use it in Healthcare? • Definitions of medicine, psychology, nursing, etc.: • Likely broad enough to encompass AI functions. • An AI system is not human, but if it functions as a HC professional, some propose licensure or some other regulatory mechanism.
  • 38. Do We Need to License AI to Use it in Healthcare? If licensure is needed: • If so, in what jurisdiction(s)? • Consider scope of practice.
  • 39. What Does FDA Say About AI in Healthcare?
  • 40. What Does FDA Say About AI in Healthcare? • Regulatory framework is not yet fully developed. • Historical: Drug or device maker wishing to modify product submits proposal, and supporting data; FDA says yes or no. • FDA recognizes potential for drug development and the impediments that fusty regulation could erect.
  • 41. What Does Federal Drug Administration (FDA) Say About AI in Healthcare? • Concerned with transparency (can it be explained? intellectual property) and security and integrity of data generated; potential for amplifying errors or biases. • FDA urges creation of a risk management plan, and care in choice of training data, testing, validation. • Pre-determined change control plans.
  • 43. What Types of Clinical Decision Software (“CDS”) Will FDA Regulate Most Closely?
  • 44. © 1994-2023 Telehealth.org, LLC All rights reserved. 44 FDA Concerns 1. CDS to “inform clinical management for serious or critical situations or conditions” especially where the health care provider cannot independently evaluate basis for recommendation. 2. CDS functions intended for patients to inform clinical management of non- serious conditions or situations, and not intended to help patients evaluate basis for recommendations. 3. Software that uses patient’s images to create treatment plans for health care provider review for patients undergoing RT with external beam or brachytherapy.
  • 45. May I Use AI in Hiring?
  • 47. What Have the States to Say About AI in Employment Decisions?
  • 48. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Most States: Silent so far. • Ill., Md., and NYC: Employers need candidate’s consent to use AI in hiring. • NYC: Must prove to a third-party audit company that Employer’s process was free of sexual or racial biases.
  • 49. Can AI Be Liable in Tort?
  • 50. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Not human, and not a legal person.  Cannot be directly liable for its own negligence or serve as an agent for vicarious liability. • Many different software and hardware developers take part.
  • 51. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Control is hard to determine, given: • Discreteness: Parts made at different times in different places without coordination. • Diffuseness: Developers may not act in conjunction. • Yet: Consider corporations and ships (an “in rem” action in admiralty law).
  • 52. Does AI Owe a Duty to Clients?
  • 53. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • For the court. • In health care, duty arises from professional relationship.  Can AI have such a relationship?  Consulting physician who does not interact with the patient owes no duty to that patient. See Irvin v. Smith, 31 P.3d 934, 941 (Kan. 2001); St. John v. Pope, 901 S.W.2d 420, 424 (Tex. 1995)
  • 54. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Does AI resemble a consultant? • Or an MRI, e.g.?  Epic sepsis model missed 2/3 of cases. JAMA IM 6/21 • Beware Automation Bias
  • 56. Can Plaintiffs Impose a Standard of Care on AI?
  • 57. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. 57 • HCP: Reasonableness  Can AI ever be unreasonable?  Is the HCP relying on AI immune from liability?  Higher SOC for HCP using AI?  Will AI endanger state standards of care? • Will res ipsa play a role?  Probably not if the harm is unexplainable, untraceable, and rare. • Nor can P establish exclusive control  But what about the auto pilot cases?
  • 58. Are AI Errors Foreseeable?
  • 59. © 1994-2023 Telehealth.org, LLC All rights reserved. 59 • Foreseeability: A precondition of a finding of negligence.  Law expects actor to take reasonable steps to reduce the risk of foreseeable harms. • Software developer cannot predict how unsupervised AI will solve the tasks and problems it encounters.  Machine teaches itself how to solve problems in unpredictable ways.  No one knows exactly what factors go into AI system’s decisions • The unforeseeability of AI decisions is itself foreseeable. Are AI Errors Foreseeable?
  • 60. © 1994-2023 Telehealth.org, LLC All rights reserved. 60 • Computational models to generate recommendations are opaque.  Algorithms may be non-transparent because they rely on rules we humans cannot understand.  No one, not even programmers, knows what factors go into ML. • AI's solution may not have been foreseeable to a human. Even the human who designed the AI.  Does that defeat a claim of duty? Are AI Errors Foreseeable?
  • 61. © 1994-2023 Telehealth.org, LLC All rights reserved. 61 • In a black-box AI system, the result of an AI’s decision may not have been foreseeable to its creator or user.  So, will an AI system be immune from liability?  Will its creator? Are AI Errors Foreseeable?
  • 62. What if AI Recommends Non-standard Treatment?
  • 63. What if AI Recommends Non-standard Treatment? • The progress problem: Arterial blood gas monitoring in premature newborns circa 1990. • Non-standard advice: Proceed with caution.  The tension between progress and tort law.
  • 64. Can I be Liable for My AI’s Mistake?
  • 65. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Can AI be my agent?  No ability to negotiate the scope of authorization.  Cannot dissolve the agent-principal relationship.  Cannot renegotiate its terms.  An agent can refuse agency; a principal can refuse to be the master. • Agency law does contemplate that the agent will use her discretion in carrying out the principal’s tasks.
  • 66. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Who controls the AI, if anyone?  AI autonomy is increasing. • If machine is autonomous, could it not embark on a frolic and detour beyond the scope of its employment?
  • 67. If AI Can be an Agent, What or Who is its Principal?
  • 68. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Note the decline of the “Captain of the Ship” doctrine. • Possibilities: • Component designer? • Medical device company? • The owner of the AI’s algorithm? • Whoever maintains the product? • Health care professionals?
  • 69. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. Possibilities (cont’d): • Hospitals and health care systems? • Pharmaceutical companies? • Professional schools? • Insurers? • Regulators?
  • 70. Could I be Liable for Promoting AI?
  • 71. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. 71 • Hospitals: Large investments in robotic systems, e.g.  Procedures more expensive.  By shifting resident teaching time from standard laparoscopy to robotic surgery, we may produce “high-cost” surgeons whom insurers will penalize. • Damage to the professional relationship?  The rapport problem.
  • 72. Does the Law Require the Patient’s Informed Consent to Use of AI in Health Care?
  • 73. Does the Law Require the Patient’s Informed Consent to Use of AI in Health Care? • Traditional:  “Every human being of adult years and sound mind has a right to determine what shall be done with his own body.” Schloendorff v. Society of New York Hospital, 105 N.E. 92 (N.Y. 1914) (Cardozo, J.) • AI: What disclosures are required?
  • 74. (cont’d) • Explain how AI works?  What does ‘informed’ mean where no one knows how black-box AI works? • Whether the AI system was trained on a data set representative of a particular patient population? • Comparative predictive accuracy and error rates of the AI system across patient subgroups? • Roles human caregivers and the AI system will play during each part of a procedure?
  • 75. (cont’d) • Whether a medtech or pharma company influenced an algorithm? • Compare results with AI and human approaches?  What if there are no data? • What if the patient doesn’t want to know? • Provider’s financial interest in the AI used? • Disclose AI recommendations HCP disapproves, or COIs?
  • 76. (cont’d) • Pedicle screw litigation: Used off-label  At present, nearly all AI is used off-label. • Investigative nature of the device's use?  Rights of subjects in clinical trials? • Experimental procedures: “most frequent risks and hazards” will remain unknown until the procedure becomes established.
  • 77. Will Plaintiffs be Able to Prevail on Product Liability Claims?
  • 78. © 1994-2023 Telehealth.org, LLC All rights reserved. 78 • A creature of state law.  Theories of liability sound in negligence, strict liability, or breach of warranty. • Responsibility of a manufacturer, distributor, or seller of a defective product.  Is AI a “product” or a service?  The law has traditionally held that only personal property in tangible form can be a “product,” and has traditionally treated software as a service. Will Plaintiffs be Able to Prevail on Product Liability Claims?
  • 79. © 1994-2023 Telehealth.org, LLC All rights reserved. 79 • Claimant must prove the item that caused the injury was defective at the time it left the seller’s hands.  By definition, ML changes the product over time. • Suppose an AI system is used to detect abnormalities on MRIs automatically and is advertised as a way to improve productivity in analyzing images:  No problem interpreting high-resolution images, but  Fails with images of lesser quality. Likely: A products liability claim for both negligence and failure to warn. Will Plaintiffs be Able to Prevail on Product Liability Claims?
  • 80. © 1994-2023 Telehealth.org, LLC All rights reserved. 80 • No matter how good the algorithm is, or how much better it is than a human, it will occasionally be wrong.  Exception to strict liability for unavoidably unsafe products. (Restatement) • Imposing strict liability: Would likely slow down or cease production of this technology. Will Plaintiffs be Able to Prevail on Product Liability Claims?
  • 81. Is There a Duty to Warn?
  • 82. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. Duty to warn: Traditional • Products: 1. Manufacturer knew or should have known that the product poses substantial risk to the user. 2. Danger would not be obvious to users. 3. Risk of harm justifies the cost of providing a warning. • Mental Health:  Tarasoff v. Regents of the University of California (1976)
  • 83. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Learned Intermediary (LI) Rule: 1. Likelihood harm will occur if the intermediary does not pass on the warning to the ultimate user. 2. Magnitude of the probable harm. 3. Probability that the particular intermediary will not pass on the warning. 4. Ease or burden of the giving of the warning by the manufacturer to the ultimate user.
  • 84. Will Plaintiffs be Able to Prove Causation?
  • 85. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • Causation will often be tough in AI tort cases. • Demonstrating the cause of an injury: Already hard in health care.  Outcomes are frequently probabilistic rather than deterministic. • AI models: Often nonintuitive, even inscrutable.  Causation even more challenging to demonstrate.
  • 86. © 1994-2022 Telebehavioral Health Institute, LLC All rights reserved. • There may be no design or manufacturing flaw: a robot involved in an accident may have been properly designed, yet an unexpected error or reasoning flaw could still have occurred, based on the structure of the computing architecture or the learning taking place in deep neural networks.  Mracek v. Bryn Mawr Hospital, 610 F. Supp. 2d 401 (E.D. Pa. 2009), aff’d, 363 Fed. Appx. 925, 927 (3d Cir. 2010)
  • 87. Who is an Expert?
  • 88. Who is an Expert? • Trial Court: Cardiologist not qualified to testify on weight loss drug combo that proprietary software package recommended because doctor is not a software expert.  Skounakis v. Sotillo A-2403-15T2 (N.J. Super. Ct. App. Div. Mar. 19, 2018) (on appeal, reversed)
  • 89. Who is an Expert? • MD who had performed many robotic surgeries not qualified on causation for want of programming expertise.  Mracek v. Bryn Mawr Hospital, 363 F. App'x. 925, 926 (3d Cir. 2010) (ED complicating robotic prostatectomy)
  • 90. Marketing: Should We Expect Breach of Warranty Claims?
  • 91. © 1994-2023 Telehealth.org, LLC All rights reserved. 91 • A warranty may arise by an affirmation of fact or a promise made by seller relating to the product. See U.C.C. § 2-313.  Need not use special phrases or formal terms (“guarantee”; “warranty”) • Promotion of an AI system as a superior product may create a cause of action for breach of warranty.  Darringer v. Intuitive Surgical, Inc., No. 5:15-cv-00300-RMW, 2015 U.S. Dist. LEXIS 101230, at *1, *3 (N.D. Cal. Aug. 3, 2015). (another DaVinci robot case) Marketing: Should We Expect Breach of Warranty Claims?
  • 92. Is AI a Person?
  • 93. © 1994-2023 Telehealth.org, LLC All rights reserved. Is AI a Person? Of course not. • Artificial agents lack the self-consciousness, human-like intentions, ability to suffer, rationality, autonomy, understanding, and social relations deemed necessary for moral personhood. But:
  • 94. © 1994-2023 Telehealth.org, LLC All rights reserved. Is AI a Person? But: • Could serve useful cost-spreading and accountability functions. • EU Parliament, 2017: Recognizing autonomous robots as “having the status of electronic persons responsible for making good any damage they may cause”.  Compulsory insurance scheme
  • 95. © 1994-2023 Telehealth.org, LLC All rights reserved. • Opponents: • Harm caused by even fully autonomous technologies is generally reducible to risks attributable to natural persons or existing categories of legal persons. • Even limited AI personhood (on the model of corporations, e.g.) will require robust safeguards, such as having funds or assets assigned to the AI person.
  • 96. Will Plaintiffs be Able to Impose Common Enterprise Liability with AI?
  • 97. Example: Hall v. Du Pont, 345 F.Supp. 353 (E.D.N.Y. 1972)
  • 98. © 1994-2023 Telehealth.org, LLC All rights reserved. 98 • 1955-’59: Blasting caps injured 13 kids, in 12 incidents, in 10 states. • Claim: Failure to warn. • Ds: 6 cap manufacturers + their trade association (TA). • Evidence: Acting independently, Ds adhered to an industry-wide safety standard; delegated labeling to the TA; industry-wide cooperation in the manufacture and design of blasting caps. • Held: If Ps could show that ≥ 1 D manufacturer made the caps, the burden of proof on causation would shift to Ds. Example: Hall v. Du Pont, 345 F.Supp. 353 (E.D.N.Y. 1972)
  • 99. © 1994-2023 Telehealth.org, LLC All rights reserved. 99 • Theory: Clinicians, manufacturers of clinical AI systems, and hospitals that employ the systems are engaged in a common enterprise for tort liability purposes.  As members of common enterprise, could be held jointly liable.  Used where Ds strategically formed and used corporate entities to violate consumer protection law. E.g., Fed. Trade Comm'n v. Pointbreak Media, LLC, 376 F. Supp. 3d 1257, 1287 (S.D. Fla. 2019) (corporations were considered to be functioning jointly as a common enterprise) (cont’d)
  • 100. How Can We Defend Ourselves Against Claims?
  • 101. © 1994-2023 Telehealth.org, LLC All rights reserved. 101 • Compliance with FDA regulations: Preemption. • Policy: No product liability claim encompasses the unpredictable, autonomous machine-mimicking-human behavior underlying AI’s medical decision-making.  Unpredictability of autonomous AI is not a bug, but a feature. How Can We Defend Ourselves Against Claims?
  • 102. © 1994-2023 Telehealth.org, LLC All rights reserved. 102 • Software is not a Product.  Rodgers v. Christie, 795 F. App'x 878, 878-79 (3d Cir. 2020): The Public Safety Assessment (PSA), an algorithm that was part of the state's pretrial release program, was not a product, so a product liability claim for the murder of a man by a killer on pre-trial release did not lie. 1. Not disseminated commercially. 2. The algorithm was neither “tangible personal property” nor even tenuously “analogous to” it. How Can We Defend Ourselves Against Claims?
  • 103. © 1994-2023 Telehealth.org, LLC All rights reserved. 103 • Breach of warranty: Privity.  Typically the clinician, and not the patient, purchased the system. • Product misuse, modification: Progress notes, e.g.  The seller does not know the specifics of these additional records or how the algorithm developed following the provider’s use. • Learned intermediary doctrine How Can We Defend Ourselves Against Claims?
  • 104. Will AI Put Me Out of Work?
  • 105. Will AI Put Me Out of Work? • ChatGPT can outperform 1st and 2nd year medical students in answering challenging clinical care exam questions. • Law students: Similar. • But: Probably not.
  • 106. (cont’d) • John Halamka: “Generative AI is not thought, it's not sentience.” • Most, if not all, countries are experiencing severe clinician shortages.  Shortages are only predicted to get worse in the U.S. until at least 2030.
  • 107. (cont’d) • AI-infused precision health tools might well be essential to improving the efficiency of care. • AI might help with burnout: easing the day-to-day weariness, lethargy, and delay of reviewing patient charts. • The day may come when the SOC requires use of AI.
  • 108. Can we Get Paid for Using AI?
  • 109. © 1994-2023 Telehealth.org, LLC All rights reserved. 109 • Consider a pathology over-read for an in-patient: • Whether the hospital is in- or out-of-network for the patient's insurance • Whether the patient's insurer deems AI to be “medically necessary” • If in-network, the negotiated fee for this specific intervention between this hospital and this patient's insurer • Whether the deal pays for hospitalization per diem or on a Diagnosis Related Group (DRG) basis • AI might add nothing to the charge • What percentage of co-insurance the patient must pay • How much of the deductible the patient will have met by the end of this episode of care. Can We Get Paid for Using AI?
  • 110. © 1994-2023 Telehealth.org, LLC All rights reserved. 110 Consider an outpatient setting: • Whether the outpatient facility is in or out-of-network for the patient's insurer. • Whether the facility is owned by a hospital.  If hospital-owned, may add a “facilities fee”. • Whether this patient's insurer deems the AI to be “medically necessary”. • Negotiated fee schedule between facility and the patient's insurer. • How much of the deductible the patient will have met by the conclusion of this episode of care. Can We Get Paid for Using AI?
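The deductible and co-insurance factors listed on the two slides above can be combined in a back-of-the-envelope calculation. The sketch below is illustrative only, with entirely hypothetical figures; it is not any payer's actual methodology:

```python
# Illustrative sketch: how a negotiated fee, remaining deductible, and
# coinsurance rate interact to determine what a patient owes for a
# single AI-assisted service. All numbers are hypothetical.

def patient_responsibility(negotiated_fee, deductible_remaining, coinsurance_rate):
    """Split a negotiated fee between patient and insurer.

    The patient first pays toward any unmet deductible; the remainder
    of the fee is then shared at the coinsurance rate.
    """
    deductible_portion = min(negotiated_fee, deductible_remaining)
    after_deductible = negotiated_fee - deductible_portion
    coinsurance_portion = after_deductible * coinsurance_rate
    patient_pays = deductible_portion + coinsurance_portion
    insurer_pays = negotiated_fee - patient_pays
    return patient_pays, insurer_pays

# Hypothetical example: $200 negotiated fee for an AI over-read,
# $50 of the deductible still unmet, 20% coinsurance.
patient, insurer = patient_responsibility(200.0, 50.0, 0.20)
print(patient, insurer)  # 50 + (150 * 0.20) = 80.0 patient; 120.0 insurer
```

The point of the sketch is that the same AI service can cost the patient very different amounts depending on where they stand in the deductible cycle and on the fee the facility negotiated.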
  • 111. © 1994-2023 Telehealth.org, LLC All rights reserved. 111 • Provided for "medically necessary" care. • Not: experimental treatments or devices • Slow governmental adoption: The telehealth model. Can We Get Paid for Using AI?
  • 112. © 1994-2023 Telehealth.org, LLC All rights reserved. 112 • Sept. 2020: CMS approved the first add-on payment (up to $1,040, in addition to inpatient hospitalization costs) for use of software by Viz.ai to help detect strokes. • Was the 43-patient study used to support the company’s claim of clinical benefit large enough to warrant the added reimbursement? Can We Get Paid for Using AI?
  • 113. Can AI Detect or Prevent Fraud?
  • 114. Can AI Detect or Prevent Fraud? • One large health insurer reported savings of $1 billion annually through AI-prevented fraud, waste, and abuse (FWA). • Federal appellate court: A company’s use of AI for prior authorization and utilization management services for Medicare Advantage (MA) and Medicaid managed care plans is subject to qualitative review that may result in liability for the AI-using entity.  U.S. ex rel. v. Evicore Healthcare MSI, LLC (2d Cir. 2022)
  • 115. Can Providers Use AI to Cheat?
  • 116. Does AI Infringe Copyright?
  • 117. J. DOE 1 et al. v. GitHub, Inc. et al., Case No. 4:22-cv-06823-JST (N.D. Cal. 2022): • Ps: They and class own copyrighted materials made available publicly on GitHub. • Ps: Representing class, assert 12 causes of action, including violations of Digital Millennium Copyright Act, California Consumer Privacy Act, and breach of contract.
  • 118. Claim: • Defendants' OpenAI's Codex and GitHub's Copilot generate suggestions nearly identical to code scraped from public GitHub repositories, without giving the attribution required under the applicable license.
  • 119. Defenses: 1. Standing: Did these Plaintiffs suffer injury? 2. Intent: Copilot, as a neutral technology, cannot satisfy DMCA § 1202’s intent and knowledge requirements.
  • 121. What Other Issues Should We Consider?
  • 122. © 1994-2023 Telehealth.org, LLC All rights reserved. • Ownership of data • Antitrust  Algorithmic pricing can be highly competitive.  But competitors could use the same software to collude.
  • 123. Does AI Engage in Invidious Discrimination?
  • 124. © 1994-2023 Telehealth.org, LLC All rights reserved. 124 Training data is key: • A facial recognition AI was unable to accurately identify more than one-third of Black women in a photo lineup.  The algorithm was trained on a majority-male and majority-white dataset. Does AI Engage in Invidious Discrimination?
  • 125. © 1994-2023 Telehealth.org, LLC All rights reserved. 125 Optum: • An algorithm built to identify high-risk patients and inform fund allocation used health care costs to make its predictions.  Only 17.7% of Black patients were identified as high-risk; the true number should have been ~46.5%.  Spending for Black patients was lower than for white patients owing to “unequal access to care.” Does AI Engage in Invidious Discrimination?
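The mechanism behind the Optum slide can be sketched with toy numbers: when past spending stands in for health need, a group whose spending is suppressed by unequal access is under-selected for the "high-risk" program even when it is equally sick. Everything below is a hypothetical illustration, not the study's actual data:

```python
# Toy illustration of proxy bias: ranking patients by past spending
# (a proxy for health need) under-selects a group whose spending is
# suppressed by unequal access to care. All numbers are hypothetical.

# (group, illness_level, past_spending) -- groups A and B are equally
# sick, but group B's spending is suppressed.
patients = [
    ("A", 10, 5000), ("A", 8, 4000), ("A", 6, 3000),
    ("B", 10, 2500), ("B", 8, 2000), ("B", 6, 1500),
]

def top_k_by(records, key_index, k):
    """Return the k records with the highest value at key_index."""
    return sorted(records, key=lambda p: p[key_index], reverse=True)[:k]

# Select the 2 "highest-risk" patients by the flawed proxy (spending)
# versus by actual illness level.
by_spending = top_k_by(patients, 2, 2)
by_illness = top_k_by(patients, 1, 2)

share_b_spending = sum(1 for p in by_spending if p[0] == "B") / 2
share_b_illness = sum(1 for p in by_illness if p[0] == "B") / 2
print(share_b_spending, share_b_illness)  # 0.0 vs 0.5
```

Ranking by illness gives group B half the high-risk slots, as it should; ranking by spending gives it none. The label ("high-risk") looks neutral, but the proxy smuggles the access disparity into the output.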
  • 126. © 1994-2023 Telehealth.org, LLC All rights reserved. 126 • Julia Angwin et al., “Machine Bias,” ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing • Emily Berman, “A Government of Laws and Not of Machines,” 98 B.U. L. Rev. 1278, 1315-16 (2018) • Karni Chagal-Feferkorn, “The Reasonable Algorithm,” U. Ill. J.L. Tech. & Pol'y (forthcoming 2018) • Duke Margolis Center for Health Policy, “Current State and Near-Term Priorities for AI-Enabled Diagnostic Support Software in Health Care” (2019) References
  • 127. © 1994-2023 Telehealth.org, LLC All rights reserved. 127 • Cade Metz and Craig S. Smith, “Warnings of a Dark Side to A.I. in Health Care,” NY Times (Mar. 21, 2019) • Daniel Schiff and Jason Borenstein, “How Should Clinicians Communicate With Patients About the Roles of Artificially Intelligent Team Members?,” 21(2) AMA Journal of Ethics E138-145 (Feb. 2019) • Nicolas P. Terry, “Appification, AI, and Healthcare's New Iron Triangle,” 20 J. Health Care L. & Pol'y 118 (2018) • Wendell Wallach, A Dangerous Master 239-43 (2015) • Andrew Tutt, “An FDA for Algorithms,” 69 Admin. L. Rev. 83, 104 (2018) References