
AI Currents: AI Research You Should Know About

AI Currents is a report by Libby Kinsey and Exponential View which aims to showcase and explain the most important recent research in Artificial Intelligence and Machine Learning.

Breakthroughs in AI come at an ever more rapid pace. We, at Exponential View, wanted to build a bridge between the newest research and its relevance for businesses, product leaders and data science teams across the world.

AI Currents is our first attempt at this: to distill the most important papers of the previous calendar quarter into a digestible and accessible guide for the practitioner or unit head. It is not an easy task, with more than 60,000 AI papers published last year.

Access the full report here: https://adobe.ly/33NPBhz


AI Currents: AI Research You Should Know About

  1. www.exponentialview.co
  2. Bridging the gap Breakthroughs in AI come at an ever more rapid pace. We, at Exponential View, wanted to build a bridge between the newest research and its relevance for businesses, product leaders and data science teams across the world. AI Currents is our first attempt at this: to distill the most important papers of the previous calendar quarter into a digestible and accessible guide for the practitioner or unit head. It is not an easy task, with more than 60,000 AI papers published last year. We’re grateful to Libby Kinsey for stepping up to the task! – Azeem Azhar, Founder at Exponential View ACCESS THE FULL REPORT HERE
  3. Bridging the gap After twelve years working in technology and VC, I went back to university to study machine learning at UCL. It was 2014: DeepMind had recently been acquired by Google and Andrew Ng’s introductory course on machine learning was already an online sensation, yet machine learning mostly hadn’t hit the mainstream. Upon graduating, I felt super-powered for all of a few minutes, and then the truth hit me that I didn’t know how to do anything… real. I’d focused so much on learning the maths and implementing the algorithms that I had a lot of context to catch up on. Since then, I’ve been trying to parse what is happening in the world of research into what it means for commercial opportunity, societal impact and widespread adoption. In this first edition of AI Currents, I’ve selected five recent papers and one bigger theme that I think are interesting from this wider perspective. – Libby Kinsey, AI Researcher and Author of AI Currents ACCESS THE FULL REPORT HERE
  4. ACCESS THE FULL REPORT HERE
  5. 2019: The year transformers hit the big time Progress towards language understanding experienced a leap with the introduction of the ‘Transformer’ by Google in 2017. The Transformer is a deep learning architecture designed to improve performance on natural language tasks efficiently. Deep neural networks for learning from text previously used layers based on local convolutions and recurrence, which analyse words in the context of a few surrounding words plus an approximate compression of words further away. The Transformer combines point-wise convolutions with a new mechanism called attention, in particular self-attention, which allows words to be analysed in a much wider context – whole surrounding sentences, paragraphs or more (a toy self-attention sketch appears after the transcript below). With it, the team beat previous state-of-the-art models on English-to-French and English-to-German translation benchmarks, at a fraction of the training cost. Read why Transformers matter and what their influence will be over the next year here.
  6. ACCESS THE FULL REPORT HERE
  7. Deep learning for symbolic mathematics G. Lample and F. Charton / December 2019 / paper In Section 1, we saw how Transformers (sequence modelling with attention) have become the dominant approach to language modelling over the last couple of years. They’ve also been applied with success to other domains that use sequential data, such as protein sequencing and reinforcement learning, where a sequence of actions is taken. What’s more surprising is their use here, with mathematical expressions. On the face of it, these aren’t sequences and ought not to be susceptible to a ‘pattern-matching’ approach like this. By ‘pattern-matching’ I mean that the Transformer learns associations from a large dataset of examples rather than understanding how to solve differential equations and calculate integrals analytically. It was really non-obvious to me that this approach could work (despite prior work; see ‘More’ below). It’s one thing to accept that it’s possible to convert mathematical expressions into sequence representations (a toy serialisation sketch appears after the transcript below); quite another to think that deep learning can do hard maths! Read the full overview of Lample and Charton’s paper and why it matters.
  8. ACCESS THE FULL REPORT HERE
  9. Selective brain damage: Measuring the disparate impact of model pruning S. Hooker, A. Courville, Y. Dauphin, and A. Frome / November 2019 / paper / blog A trained neural network consists of a model architecture and a set of weights (the learned parameters of the model). These are typically large (they can be very large – the largest of OpenAI’s pre-trained GPT-2 language models, referred to in Section 1, is 6.2GB!). Their size inhibits storage and transmission and limits where they can be deployed. In resource-constrained settings, such as ‘at the edge’, compact models are clearly preferable. With this in mind, methods to compress models have been developed. ‘Model pruning’ is one such method, in which some of the neural network’s weights are removed (set to zero) and hence do not need to be stored (reducing memory requirements) and do not contribute to computation at run time (reducing energy consumption and latency); a schematic pruning sketch appears after the transcript below. Rather surprisingly, numerous experiments have shown that removing weights in this way has a negligible effect on the overall performance of the model. This paper highlights that naive use of pruning in production, one that looks only at overall model performance, might have negative implications for robustness and fairness objectives. Read more about this paper and its implications here.
  10. ACCESS THE FULL REPORT HERE
  11. International Evaluation of an AI System for Breast Cancer Screening S. Mayer McKinney et al. / January 2020 / paper Detecting cancer earlier and more reliably is a hugely emotional topic and one with news value beyond the technical media. It’s refreshing to see a counterpoint to the negative press that has attended fears of AI-driven targeting, deep fakes and tech-accelerated discrimination in recent months. I, for one, am starving for evidence of applications of AI with positive real-world impact. But does the research justify the hype? The first thing to note is that this kind of approach to AI and mammography is not novel; it’s of established interest in academia and the commercial sector. There’s still a very long way to go from here to deployment. First, as the authors note, understanding the ‘full extent to which this technology can benefit patient care’ will require clinical studies. That means evaluating performance in clinically realistic settings, across representative patient cohorts and in randomised controlled trials. Then, if the evidence supports deployment, there are some non-trivial updates to service design and technology integration required to incorporate AI into large screening programmes.
 
 Read more about this paper and its implications.
  12. ACCESS THE FULL REPORT HERE
  13. Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model J. Schrittwieser et al. / November 2019 / paper / poster DeepMind has been making headlines since 2013, when it first used deep reinforcement learning to play ’70s Atari games such as Pong and Space Invaders. In a series of papers since then, DeepMind has improved its performance in the computer games domain (achieving superhuman performance in many Atari games and StarCraft II) and also smashed records in complex planning games such as chess, Go and Shogi (Japanese chess), which had previously been tackled with ‘brute force’ (that is to say, rules plus processing power, rather than learning). With this latest paper, DeepMind’s researchers show that the same algorithm can be used effectively in both domains – planning and visual ones – where previously different learning architectures were required. MuZero does this by taking DeepMind’s AlphaZero architecture (which achieved superhuman performance in chess, Go and Shogi in 2017) and adding the capability to learn its own model of the environment, making it a general-purpose reinforcement learning approach (a conceptual sketch of planning in a learned model appears after the transcript below).
 
 Read more about this paper and its implications.
  14. Libby Kinsey Libby is an AI researcher and practitioner. She spent ten years as a VC investing in technology start-ups, and is co-founder of UK AI ecosystem promoter Project Juno. Libby is a Dean's List graduate in Machine Learning from University College London, and has most recently focused her efforts on working with organisations large and small, public and private, to build AI capabilities responsibly.
 Azeem Azhar Azeem is an award-winning entrepreneur, analyst, strategist and investor. He produces Exponential View, the leading newsletter and podcast on the impact of technology on our future economy and society.
 Marija Gavrilov Marija leads business operations at Exponential View. She is also a producer of the Exponential View podcast (Harvard Business Presents Network).
 Contact: aicurrents@exponentialview.co
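
The sketches that follow are illustrative companions to the transcript above; they are not code from the report or from the papers it covers. First, slide 5: a minimal sketch of scaled dot-product self-attention, the mechanism that lets each word be analysed in the context of every other word. The dimensions and random weights are placeholder assumptions; a real Transformer learns these projections and stacks many multi-head layers.

```python
# A toy illustration of scaled dot-product self-attention (slide 5).
# Weights are random placeholders; a real Transformer learns them and
# stacks many such layers with multiple heads.
import numpy as np

def self_attention(x, d_k=16, seed=0):
    """x: (seq_len, d_model) token embeddings -> contextualised embeddings."""
    rng = np.random.default_rng(seed)
    d_model = x.shape[1]
    # Learned projections in a real model; random here for illustration.
    W_q = rng.normal(size=(d_model, d_k))
    W_k = rng.normal(size=(d_model, d_k))
    W_v = rng.normal(size=(d_model, d_k))
    Q, K, V = x @ W_q, x @ W_k, x @ W_v
    # Every position attends to every other position: this is what lets a
    # word be analysed against the whole surrounding sentence or paragraph.
    scores = Q @ K.T / np.sqrt(d_k)                  # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V                               # (seq_len, d_k)

# Example: 5 'tokens' with 32-dimensional embeddings.
tokens = np.random.default_rng(1).normal(size=(5, 32))
print(self_attention(tokens).shape)  # (5, 16)
```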
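Slide 7 rests on the idea that a mathematical expression can be serialised into a token sequence and fed to a sequence-to-sequence model. Below is one plausible serialisation, prefix notation over an expression tree; the tree format and token vocabulary are assumptions made for illustration, not Lample and Charton's exact scheme.

```python
# Illustrative only: serialising an expression tree into a prefix-notation
# token sequence, the kind of input a seq2seq Transformer could consume.
# The tree format and vocabulary are assumptions, not the paper's scheme.

def to_prefix_tokens(expr):
    """expr is a nested tuple: ('op', arg1, arg2, ...) or a leaf string."""
    if isinstance(expr, tuple):
        op, *args = expr
        tokens = [op]
        for a in args:
            tokens += to_prefix_tokens(a)
        return tokens
    return [expr]

# The expression x * cos(x), written as a tree:
expression = ('mul', 'x', ('cos', 'x'))
print(to_prefix_tokens(expression))
# ['mul', 'x', 'cos', 'x'] -- a flat sequence a model can 'translate' into
# the sequence encoding, say, its integral.
```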
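Slide 9 describes pruning as setting some of a network's weights to zero. A common criterion (though not the only one) is weight magnitude; the schematic below prunes the smallest-magnitude weights of a single layer. It is a generic illustration of the idea, not the specific pruning method evaluated by Hooker et al.

```python
# A schematic of magnitude pruning (slide 9): zero out the smallest weights
# of a layer so they need not be stored or multiplied at run time.
# Generic illustration, not the exact method studied in the paper.
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Return a copy of `weights` with the smallest `sparsity` fraction zeroed."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

layer = np.random.default_rng(0).normal(size=(256, 128))
pruned = magnitude_prune(layer, sparsity=0.9)
print(f"non-zero weights: {np.count_nonzero(pruned) / pruned.size:.1%}")
# Roughly 10% of the weights remain; overall accuracy often barely moves,
# but as the paper shows, the errors that do appear can concentrate on
# particular classes of input.
```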
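Finally, slide 13: a heavily simplified sketch of the MuZero idea of planning inside a learned latent model rather than inside a game simulator. The three "networks" here are untrained stand-in linear maps, and the greedy one-step lookahead stands in for the Monte Carlo tree search the real system uses; it is a conceptual sketch only.

```python
# A highly simplified sketch of the MuZero idea (slide 13): plan by rolling
# out candidate actions inside a *learned* latent model instead of the real
# environment. Stand-in random linear maps replace the trained networks.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, LATENT_DIM, N_ACTIONS = 8, 4, 3

H = rng.normal(size=(OBS_DIM, LATENT_DIM))                # representation
G = rng.normal(size=(N_ACTIONS, LATENT_DIM, LATENT_DIM))  # dynamics, per action
V = rng.normal(size=(LATENT_DIM,))                        # value head

def represent(observation):   # h: raw observation -> latent state
    return np.tanh(observation @ H)

def dynamics(state, action):  # g: (latent state, action) -> next latent state
    return np.tanh(state @ G[action])

def value(state):             # f: latent state -> scalar value estimate
    return float(state @ V)

def plan(observation, depth=3):
    """Pick the first action of the best imagined rollout, never touching
    the real environment's rules -- everything happens in latent space."""
    root = represent(observation)
    best_action, best_value = None, -np.inf
    for a in range(N_ACTIONS):
        s = dynamics(root, a)
        for _ in range(depth - 1):
            # Greedy rollout entirely inside the learned model.
            next_values = [value(dynamics(s, b)) for b in range(N_ACTIONS)]
            s = dynamics(s, int(np.argmax(next_values)))
        if value(s) > best_value:
            best_action, best_value = a, value(s)
    return best_action

print(plan(rng.normal(size=OBS_DIM)))  # e.g. 2 -- meaningless until trained
```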
