Many mobile apps rely on user-generated content, whether they are photo-sharing tools, social media apps or chat apps. Whilst these kinds of apps are great and have helped create large communities of like-minded people, they also inevitably attract inappropriate content. This talk looks at using AI to reduce the risk of users being exposed to inappropriate images and text.

Description:

AI is one of the new tools that promises to improve our apps. Using AI, we can detect faces, classify images and understand user-created content. These are all useful capabilities for reducing inappropriate content in modern mobile apps, especially social media apps.

In this session we will look at cloud-based AI services and how they can be used in a mobile app built with Xamarin (although the concepts apply equally to native, React Native and PhoneGap apps). We'll see how to detect faces in photos taken with the camera, then classify those images to filter out the dreaded 'duck face'. To do this filtering we will create an image classifier using online AI tools, and see how the resulting model can be consumed either via a remote API call or run on-device using CoreML or TensorFlow. Finally, we'll look at using content moderation to block profanity in text input. By the end of this session you will be well placed to start using AI to enhance your mobile apps.

Note - this session will contain a small amount of strong language, used to demo content filtering.
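To make the image-filtering step concrete, here is a minimal sketch of what an app might do with a classifier's predictions, whether they come back from a remote API call or from an on-device CoreML/TensorFlow model. The label name, confidence threshold, and prediction format are illustrative assumptions for the 'duck face' demo, not the schema of any particular service:

```python
# Sketch only: the label, threshold, and prediction shape below are
# assumptions; a real cloud classifier or on-device model defines its
# own output schema.

DUCK_FACE_LABEL = "duck_face"   # assumed label from the custom classifier
CONFIDENCE_THRESHOLD = 0.8      # assumed cut-off for rejecting a photo

def should_block(predictions: list) -> bool:
    """Return True if any prediction tags the image as 'duck face'
    with confidence at or above the threshold."""
    return any(
        p["label"] == DUCK_FACE_LABEL and p["confidence"] >= CONFIDENCE_THRESHOLD
        for p in predictions
    )

# Example: a photo the classifier is 92% sure is a duck face gets blocked.
photo_tags = [
    {"label": "duck_face", "confidence": 0.92},
    {"label": "selfie", "confidence": 0.61},
]
print(should_block(photo_tags))  # True
```

The same thresholding logic applies regardless of where inference runs; only the call that produces `predictions` changes between the remote-API and on-device paths.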
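For the text side, a hedged sketch of the masking step a content-moderation service performs. A real service (such as a cloud moderation API) uses maintained term lists and machine learning; this local word-list filter, with stand-in terms chosen as assumptions for the demo, only illustrates the idea:

```python
import re

# Sketch only: a stand-in term list and asterisk masking, assumed for
# illustration; production moderation relies on a hosted service.
BLOCKED_TERMS = {"darn", "heck"}

def moderate(text: str) -> str:
    """Replace each blocked word with asterisks, case-insensitively."""
    def mask(match):
        word = match.group(0)
        return "*" * len(word) if word.lower() in BLOCKED_TERMS else word
    return re.sub(r"[A-Za-z']+", mask, text)

print(moderate("Well, darn it!"))  # Well, **** it!
```

In an app, this check would sit between the text input and the post/send action, rejecting or masking flagged input before it reaches other users.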