As artificial intelligence (AI) continues to advance at an unprecedented rate, there is an urgent need to consider its ethical implications. AI has the potential to revolutionize many industries, but it also presents new challenges and risks. Programmers play a critical role in shaping the development and deployment of AI, and must therefore be aware of its ethical implications.
In this session, we will explore the ethical considerations of AI and its impact on society. We will examine the challenges of developing and deploying AI systems that are unbiased, transparent, and accountable. We will discuss the ethical implications of AI in different fields, including healthcare, finance, and transportation. We will also explore the role of programmers in ensuring that AI is developed and deployed ethically.
Finally, we will discuss best practices for ethical implementation of AI, including designing for transparency, fairness, and accountability. We will also consider the importance of ethical decision-making frameworks and guidelines for AI development and deployment. By the end of this session, participants will have a better understanding of the ethical implications of AI and how they can play a role in ensuring that AI is developed and deployed ethically.
2. Ethics
The What
Ethics examines the rational justification for our moral judgments; it studies
what is morally right or wrong, just or unjust. In a broader sense, ethics
reflects on human beings and their interaction with nature and with other
humans, on freedom, on responsibility and on justice.
https://www.canada.ca/en/treasury-board-secretariat/services/values-ethics/code/what-is-ethics.html
3. Ethics
The Importance
Ethics is what guides us to tell the truth, keep our promises, or help
someone in need. There is a framework of ethics underlying our lives on a
daily basis, helping us make decisions that create positive impacts and
steering us away from unjust outcomes.
https://rahuleducation.org/our-scribes/ethics/
5. Isaac Asimov’s Laws of Robotics
• Law One – “A robot may not injure a human being or,
through inaction, allow a human being to come to harm.”
• Law Two – “A robot must obey orders given to it by
human beings except where such orders would conflict
with the First Law.”
• Law Three – “A robot must protect its own existence, as
long as such protection does not conflict with the First or
Second Law.”
• Asimov later added the “Zeroth Law,” above all the others
– “A robot may not harm humanity, or, by inaction, allow
humanity to come to harm.”
https://www.brookings.edu/articles/isaac-asimovs-laws-of-robotics-are-wrong/
10. Types of AI
[Image prompt: "three different cute robots, one dark blue, one yellow, one red, all looking at each other, confused"] @jeff_mcwherter
21. Systemic discrimination involves the procedures, routines and
organizational culture of any organisation that, often without intent,
contribute to less favourable outcomes for minority groups than for
the majority of the population, from the organisation’s policies,
programs, employment, and services.
https://www.coe.int/en/web/interculturalcities/systemic-discrimination
22. What Can We Do To Fix It
• Build it Better
• More Data
• More Diverse Data
• Police it Better
Gender recognition AI, no matter how inclusive we try to make it, will always be premised
on the idea that someone can look at you and tell you what your deal is, and that is always
going to be incorrect.
23. Data Flattening Problem
Prompt: "what is an image that relates to stealing"
[Image prompt: "cute red robot in an Art Heist: an artistic representation of a sophisticated art theft, showing thieves taking valuable paintings from a museum or gallery, possibly wearing black suits and masks, which adds an element of drama and intrigue"]
24. [Image prompt: "Create an image with Jason from Friday the 13th stunting cheerleaders, using the style of Steve Jencks from Lansing, Michigan and screamprints.com"]
27. Distribution of Wealth
[Image prompt: "cute red robot that looks like the Monopoly man, with a top hat and monocle, with gold coins in the background like Scrooge McDuck's money bin"] @jeff_mcwherter
28. Environmental Problem
[Image prompt: "cute red robot holding a dead, drooping daisy, with factories with billowing smokestacks in the background"] @jeff_mcwherter
29. What Can We Do To Fix It
• Regulation - privacy laws, antitrust laws, copyright
• Data-Owning Democracy - you decide what to do
• Digital Socialism - owned by society
Both Data-Owning Democracy and Digital Socialism recommend the redistribution of property
to increase citizens' economic power because AI changes our concept of private property.
30. If we truly believe humans are equal, we need to
do better.
31. The Alignment Problem
[Image prompt: "robot picking a daisy in a green field"] @jeff_mcwherter
32. [Image prompt: "cute red robot creating another robot in a workshop"] @jeff_mcwherter
Conclusions
33. When people talk about ethical AI, instead of
talking about Skynet, we should think about working
conditions, climate change, and how to make
the economy serve humans rather than the
other way around.
Abigail Thorn, Philosophy Tube
-The rules were introduced in his 1942 short story "Runaround"
-We need to start wrestling with the ethics of the people behind the machines.
https://www.brookings.edu/articles/isaac-asimovs-laws-of-robotics-are-wrong/
-Digital consciousness that surpasses humanity and might do away with us
-Does not exist
-AI is hot; show Mrs. Davis
-Instead of focusing on a future existential crisis, let's focus on something more tangible
-Making sure AI aligns with Human Values
-It’s the idea that AI systems’ goals may not align with those of humans, a problem that would be heightened if superintelligent AI systems are developed
https://spectrum.ieee.org/the-alignment-problem-openai
-Algorithms - rank job applications, credit scores, parole decisions
-Security systems - recognize faces and voices
-Text and image systems - ChatGPT and DALL-E; LLMs from LLaMA to Alpaca to Vicuna; Bard and ChatGPT
Before we get to AGI, there are ethical issues with these types of AI
-BiasExplorer WEBSITE
https://www.aclu.org/news/womens-rights/why-amazons-automated-hiring-tool-discriminated-against
-"Men must be better applicants, because they get more jobs"
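A toy sketch of the point above: a model "trained" on historical hiring decisions simply reproduces the bias baked into those decisions. The data and numbers here are invented for illustration; this is not Amazon's actual system.

```python
from collections import defaultdict

# Hypothetical historical hiring data: (gender, hired) pairs.
# The labels reflect past biased decisions, not applicant quality.
history = [("m", True)] * 80 + [("m", False)] * 20 \
        + [("f", True)] * 30 + [("f", False)] * 70

def train_rate_model(data):
    """'Train' by memorizing the historical hire rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in data:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

model = train_rate_model(history)
# The "model" scores men 0.8 and women 0.3, regardless of
# qualifications: garbage (bias) in, garbage (bias) out.
```

A real system is far more complex, but the failure mode is the same: the label it learns from is "who got hired before", not "who was qualified".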
Tools used for Risk Assessments
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
https://www.thedailybeast.com/openais-impressive-chatgpt-chatbot-is-not-immune-to-racism
-you get denied something and you don’t know why
-EU laws on transparency
-Up until a few years ago, there was no standard way to document models
-It's important to document what a model can do, and what it can't
https://huggingface.co/meta-llama/Meta-Llama-3-8B
https://huggingface.co/spaces/huggingface/Model_Cards_Writing_Tool
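A model card is structured documentation of what a model is for and where it fails. A minimal sketch as plain data; the field names are illustrative, loosely inspired by Hugging Face model cards, not their exact schema.

```python
# An illustrative model card as plain data (hypothetical model and fields).
model_card = {
    "model_name": "example-classifier-v1",
    "intended_use": "Ranking internal support tickets by urgency.",
    "out_of_scope_use": "Hiring, lending, or any decision about a person.",
    "training_data": "2019-2023 support tickets (English only).",
    "known_limitations": [
        "Untested on non-English text.",
        "Accuracy drops sharply on tickets under 10 words.",
    ],
    "evaluation": {"accuracy": 0.91, "eval_set": "held-out 2023 tickets"},
}

def render_card(card):
    """Render the card as readable text for a model repository README."""
    lines = [f"# Model card: {card['model_name']}"]
    for key in ("intended_use", "out_of_scope_use", "training_data"):
        lines.append(f"- {key.replace('_', ' ')}: {card[key]}")
    for limit in card["known_limitations"]:
        lines.append(f"- limitation: {limit}")
    return "\n".join(lines)
```

The point is less the format than the discipline: stating out-of-scope uses and limitations up front, before someone deploys the model somewhere it was never tested.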
-If an AI makes a decision about your life, you should be entitled to know why it did that: the right to an explanation
-A counterfactual explanation describes a causal situation in the form: “If X had not occurred, Y would not have occurred”
-If you hadn't spelled the company name wrong… you would have gotten the job
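A minimal sketch of that idea: search for a single change to the input that would have flipped the decision. The decision rule and features here are invented stand-ins for a real model.

```python
# Toy decision rule (stand-in for a deployed model): approve if income
# is high enough and the application has no typo. Features are hypothetical.
def decide(applicant):
    return applicant["income"] >= 50 and not applicant["has_typo"]

def counterfactual(applicant, changes):
    """Return the first single change that flips a rejection to approval:
    'if X had not occurred, Y would not have occurred'."""
    for feature, new_value in changes:
        modified = dict(applicant, **{feature: new_value})
        if decide(modified) and not decide(applicant):
            return f"If {feature} had been {new_value!r}, you would have been approved."
    return None

applicant = {"income": 60, "has_typo": True}
explanation = counterfactual(applicant, [("income", 80), ("has_typo", False)])
# -> "If has_typo had been False, you would have been approved."
```

Note what the explanation does not require: no source code, no model internals, just "what minimal change would have altered the outcome in your case".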
-What kind of explanation do you want… do you want to see the code, or be told what the model did in your case?
-Black boxes are so complex they are a mystery even to their creators; we could only guess that if you had spelled it right, you would have gotten the job
-Models are so complex that even their programmers don't understand them, so they can build a second, surrogate model to explain the first one
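A toy sketch of that surrogate idea: approximate an opaque model with the simplest possible interpretable rule, and report how often the simple rule agrees with the original. The black box here is invented; real surrogates use proper learners, but the shape is the same.

```python
# Stand-in for an opaque model: complicated internal logic, boolean output.
def black_box(x):
    return ((x * 37) % 100 + x) > 90  # opaque to its users

def fit_threshold_surrogate(model, xs):
    """Fit the simplest interpretable surrogate, a single threshold on x,
    chosen to agree with the black box as often as possible."""
    labels = [model(x) for x in xs]
    best_t, best_agree = None, -1
    for t in xs:
        agree = sum((x >= t) == y for x, y in zip(xs, labels))
        if agree > best_agree:
            best_t, best_agree = t, agree
    return best_t, best_agree / len(xs)

threshold, fidelity = fit_threshold_surrogate(black_box, list(range(100)))
# `fidelity` is the fraction of inputs where the simple rule agrees with
# the black box; a surrogate is always only an approximation.
```

That imperfect fidelity is also the opening for fairwashing, discussed next: a surrogate "explanation" can be chosen to look fair while the real model is not.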
-Fairwashing is the idea that a company who designed an AI keeps the 'brain' of the AI hidden, claims it's fair, but often knows that it is in fact, heavily biased to produce specific results.
-Concept introduced by a group of scientists in 2019
-Full-body scanner
-Just before you step inside, someone looks at you and then pushes a button: blue or pink
-The scanner looks for lumps that might indicate something hidden under your clothes
-If you are a transgender woman with a penis and breasts… this presents a dilemma
-You may have to answer humiliating questions, out yourself as trans, or even get groped
-The system sees a penis on a woman as strange because cisgender people (folks who are not trans) built it
-Ordinary people cannot challenge organizations that misuse AI
-AI and all Technology is used in an already imperfect society
-The world is not black and white, but the machine expects it to be
-Technology encodes ways of seeing
-Epidermalization comes from Frantz Fanon, who, while in France, found people considered him strange and dangerous because he was Black
-They were projecting meaning onto the color of his skin
-At any time, he could have the meaning of his being assigned to him by a white person
-Part of being in a racial minority is having a white person tell you what your deal is
-Digital epidermalization is a concept introduced by researcher Simone Browne
-We assume the machine is neutral and fair
-Machines detect information that is already there
-Machine tells you “thats strange”
-People assume the machine is neutral
-Was it not tested thoroughly?
-Not enough diverse people working on the software?
-Not enough training data that was diverse?
What does it have to do with AI?
-Facial recognition, Gender detection
-Facial recognition optimized for white faces
-Gender recognition… you can't always tell gender from a face; it may work well enough most of the time
-It encodes a way of seeing, particularly with trans people
-"That's unfortunate; they are a small minority." A mistake that keeps happening to one group of people… that's not a mistake… that's systemic discrimination
Also called institutionalized discrimination
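The "mistakes that keep happening to one group" point can be shown with a few lines of arithmetic. The numbers are made up: a system that looks 95% accurate overall can be 0% accurate for a minority group, because the aggregate hides where the errors land.

```python
# Illustrative prediction results: (group, was_prediction_correct) pairs.
# 95 majority-group cases, all correct; 5 minority-group cases, all wrong.
results = [("majority", True)] * 95 + [("minority", False)] * 5

def accuracy(rows, group=None):
    """Accuracy overall, or restricted to one group."""
    rows = [r for r in rows if group is None or r[0] == group]
    return sum(correct for _, correct in rows) / len(rows)

overall = accuracy(results)               # 0.95: the headline number looks fine
minority = accuracy(results, "minority")  # 0.0: the system fails one group every time
```

This is why per-group evaluation matters: a single aggregate metric cannot distinguish random error from error concentrated on a minority.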
-AI is used in an imperfect society
-Solutions may not be technical
-No consent is given
-It's just data
-Training data is made from and by people
-Common Crawl
-Hard to find proof https://spawning.ai/
When it comes to training AI models, however, the use of copyrighted materials is currently treated as fair game. That's because the fair use doctrine permits the use of copyrighted material under certain conditions without needing the permission of the owner. But pending lawsuits could change this.
Common Crawl WEBSITE
-Sexualizes cheerleaders… no male cheerleaders
-I think the style is close
-Putting the character in the situation
-haveibeentrained.com
From spawning
-LAION-5B… in hot water for having child abuse photos in the dataset
-Opt-out: Stability (creators of Stable Diffusion) and Hugging Face (the large model repository) are respecting opt-out requests
-Common Crawl dump
-microwork
-Amazon Mechanical Turk
-difficult to unionize
-Jobs getting de-skilled and atomized; more surveillance, less worker power, and generally shitty conditions
-sub-employment
-You are freer when you work for humans
-OpenAI paid people in Kenya $2 an hour to train the model
-It's not just the money; think of the hateful content they had to read all day
https://time.com/6247678/openai-chatgpt-kenya-workers/
OpenAI used outsourced Kenyan laborers earning less than $2 per hour, a TIME investigation has found.
-DataAnnotation.tech WEBSITE
Estimated $500 billion in new household wealth by the year 2045
Georgists: property taxes should be eliminated and replaced with a tax on land
Sam Altman says to expect a land tax and a corporate tax; this money should be used to buy shares in companies and given to citizens as a universal income
"Moore's Law for Everything", Sam Altman
-Non-renewable metal mining and CO2
-Mining causes damage to people and the environment
-Training an AI can take as much energy as 30 homes use over a year… and emit 25 tons of CO2, which is like driving your car around the planet 25 times
-Each time you query that AI model, it comes with a cost to our planet
-Big models like ChatGPT emit over 20x more carbon
-When you are running high-processing jobs, do you know where the energy comes from?
-Canada has a lot of hydro energy; when generating your model, where you do it matters
-CodeCarbon.io WEBSITE
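The "where you run it matters" point is just arithmetic: emissions scale with the carbon intensity of the local grid. A back-of-envelope sketch; the intensity figures and the training-run size are rough illustrations, not measurements.

```python
# Ballpark grid carbon intensities in grams of CO2 per kWh (illustrative).
GRID_INTENSITY = {
    "hydro_heavy": 30,   # e.g. a mostly-hydro grid like Quebec's
    "coal_heavy": 800,   # a grid dominated by coal
}

def training_emissions_kg(energy_kwh, grid):
    """Estimated CO2 in kilograms for a training run on a given grid."""
    return energy_kwh * GRID_INTENSITY[grid] / 1000

run_kwh = 100_000  # hypothetical training run
hydro = training_emissions_kg(run_kwh, "hydro_heavy")  # 3,000 kg of CO2
coal = training_emissions_kg(run_kwh, "coal_heavy")    # 80,000 kg of CO2
```

Same model, same code, a 25x difference in emissions purely from siting: tools like CodeCarbon automate exactly this kind of estimate for real workloads.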
-AI is used in an imperfect society
-Solutions may not be technical
-Consent
-AI does not exist in a vacuum; it is part of society and can impact people
-We need to create tools for tracking and understanding AI better
-It's important that AI stays accessible, so we know when it works and when it does not
-When we create tools to measure AI impacts, we can get an idea of how bad it is
-We can start creating guardrails
-Companies might use a model because it is more sustainable, or you might select a model because it respects copyright
-Understand how large-scale computing needs resources and labor
-Even if we built a poorly aligned AGI, its power would not come from technology but from human systems in which technology is embedded.
-Power of a hostile AGI would still depend on its relationship to our existing political and social systems
-Focus on current tangible impacts
Can we make ethical AI?…
-Who is "we"?
-Are we equipped as a society to make ethical changes?
-It's not enough to pay lip service; we need to act