Bridging the gap in enterprise AI: a quick three-step introduction to the topic:
1) How to take a down-to-earth approach to enterprise AI
2) Which gaps does your company need to tackle?
3) One bridge from a zoo of tools: SKIL. In particular, we look at recent developments in SKIL's Python client, enabling data scientists to manage the full model lifecycle from Python.
3. Skymind overview
● Deep learning for enterprise
● Globally distributed, remote
● Creators of Deeplearning4j (DL4J)
● 300k+ downloads per month
4. About me
● A bit too busy on GitHub
● DL4J & Keras core dev team
● Author of elephas
● Author of betago
● Hyperopt maintainer
● Coursera instructor
● Author of “Deep Learning and the Game of Go”
● ...
5. Bold claims for kick-off
● AI time-to-value is a big problem today
● The right tooling can speed up the lifecycle
● Building models is not the problem
6. Enterprise AI
“The question of whether machines can think is about as relevant as the question of whether submarines can swim.”
- E. Dijkstra
7. A paradigm shift at work
Classical programming: Data + Algorithm → Output
vs.
Machine learning: Data + Output → Algorithm
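The shift on this slide can be made concrete in a few lines of code: in classical programming we write the algorithm ourselves; in machine learning we hand over data and outputs and let the algorithm (here, two parameters) be learned. This is an illustrative sketch using plain least squares on a toy temperature-conversion task, not part of any Skymind tooling.

```python
# Classical programming: we write the algorithm; data in, output out.
def fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# Machine learning: we supply data and outputs; the algorithm is learned.
celsius = [0.0, 10.0, 20.0, 30.0, 40.0]
target = [fahrenheit(c) for c in celsius]

# Fit output = w * input + b via least squares for a single feature.
n = len(celsius)
mean_x = sum(celsius) / n
mean_y = sum(target) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(celsius, target)) / \
    sum((x - mean_x) ** 2 for x in celsius)
b = mean_y - w * mean_x
print(w, b)  # recovers the rule we wrote by hand: 1.8 and 32.0
```

Because the toy data is exactly linear, the learned parameters match the hand-written rule; with real data they only approximate it.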
8. What are the drivers?
● Data
● Hardware & Compute
● Theoretical advances
● Software
12. Political
● Strategic alignment (e.g. DS vs. Product)
● Resource allocation
● Unrealistic expectations
● Unwillingness to pay the price (talent, infrastructure, ...)
Cold start
13. Cultural / Team
● Separation of “Science” and “Engineering”
● Data scientists lack a realistic view of production
● Cool stuff never gets used
● Data scientists don’t want to do the dirty parts
● Experiments move faster than product integration
● Lack of expertise
● Difficult to hire
Model hand-over
15. Data & Infrastructure
● Separate infrastructures
● Insufficient structure
● Difficult access
● Inconsistencies and errors
● Data wrangling is the biggest part of the equation
Pipeline jungles
16. Production & Monitoring
● Models need to go to production
● Models need to stay there, up to date
● Often a clumsy process
● Track model performance & react
● Track input & output data
● Deal with the consequences
● Communicate results
Keeping logs
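The monitoring points above (track input and output data, track performance, react) can be sketched in a few lines. This is a generic illustration with hypothetical helper names, not part of the SKIL API: one function appends prediction records to a log, another flags when live inputs drift away from the training baseline.

```python
import json
import time

def log_prediction(logfile, features, prediction):
    """Append one prediction record (JSON lines) so live
    behaviour can be audited later. Schema is illustrative."""
    record = {
        "ts": time.time(),
        "features": features,
        "prediction": prediction,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

def input_drift(baseline_mean, live_values, tolerance=0.5):
    """Flag when the mean of a live feature drifts more than
    `tolerance` away from the mean seen in training data."""
    live_mean = sum(live_values) / len(live_values)
    return abs(live_mean - baseline_mean) > tolerance
```

A real deployment would track richer statistics per feature (variance, quantiles, missing-value rates), but even this minimal log makes "deal with the consequences" possible after the fact.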
17. Modelling risks
● The world changes, the model can’t keep up
● Training data is not realistic
● Model does not generalize
● When to retrain, update, or fall back?
● When to update the test set?
Poor live performance
21. SKIL Python interface
● End-to-end model lifecycle management
● Works anywhere, carry on
● Captures your workflow:
○ Define a workspace for your experiments
○ Create tracked experiments
○ Add models to your experiments
○ Add evaluation metrics
○ Create deployments for your model
○ Deploy models as services
○ Get predictions from services
22. Model deployment example
pip install skil

from skil import Skil, WorkSpace, Experiment, Model, Deployment

# Connect to a running SKIL server and set up a tracked experiment
skil_server = Skil()
work_space = WorkSpace(skil_server)
experiment = Experiment(work_space)

# Register a trained model file with the experiment
model = Model('model.h5', experiment)

# Deploy the model as a service with two replicas and query it
deployment = Deployment(skil_server, "my_deployment")
service = model.deploy(deployment, scale=2)
print(service.predict(test_data))
24. Conclusion
● What is your view on enterprise AI?
● Identify your gap
● Bridge it accordingly
DS meetup Dec 13th: live demo of SKIL for object detection with YOLO