Join the people building and operating open infrastructure at the inaugural Open Infrastructure Summit. The Summit schedule features over 300 sessions organized by use case, including artificial intelligence and machine learning, continuous integration and deployment, containers, edge computing, network functions virtualization, security, and public, private and multi-cloud strategies.
In this post we’re highlighting some of the sessions you’ll want to add to your schedule about artificial intelligence and machine learning. Check out all the sessions, workshops and lightning talks focusing on this topic here.
Getting a neural network up and running with OpenLab
For the everyday developer wanting to explore AI/ML, hardware can be difficult to obtain and maintain, even for the most rudimentary applications and testing. Needing to go beyond a single local development machine only compounds the problem. OpenLab is curated infrastructure, accessible to open source projects and to individuals working within and on open source projects, designed to address this use case. With access to GPUs, FPGAs, IoT devices and more, HPC, AI/ML, deep learning and other workloads can be explored quickly. In this beginner-level presentation, Huawei’s Melvin Hillsman will walk through getting an OpenLab account, obtaining resources and getting a simple neural network up and running with an application that should bring back great childhood memories. Details here.
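To get a feel for what "a simple neural network up and running" involves before the session, here is a minimal sketch in pure Python (no GPU or framework required). The XOR task and the network shape are illustrative assumptions, not the session's actual demo: one hidden layer of sigmoid units trained by backpropagation.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR training data: two binary inputs, one target output
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]  # input -> hidden
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]                      # hidden -> output
b2 = 0.0

def forward(x):
    """Return hidden activations and network output for one example."""
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def loss():
    """Sum of squared errors over the four training examples."""
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

def train(epochs=5000, lr=0.5):
    global b2
    for _ in range(epochs):
        for x, t in DATA:
            h, y = forward(x)
            # Backpropagation for squared error with sigmoid activations
            dy = (y - t) * y * (1 - y)
            for j in range(H):
                dh = dy * w2[j] * h[j] * (1 - h[j])  # uses w2[j] before updating it
                w2[j] -= lr * dy * h[j]
                for i in range(2):
                    w1[j][i] -= lr * dh * x[i]
                b1[j] -= lr * dh
            b2 -= lr * dy

before = loss()
train()
after = loss()
print(before, after)  # the training error should drop substantially
```

OpenLab's value is that the same idea, scaled to real datasets and frameworks, needs GPUs and storage you may not have locally.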
Facial recognition in five minutes with OpenStack
In this session, Red Hat’s Erwan Gallen and Sylvain Bauza will demonstrate how a facial recognition program paired with graphics processing units (GPUs) can maximize the speed and accuracy of facial recognition, and how to tune frame decoding and inference time. They’ll also show application developers how to set up face-detection networks quickly and choose the right neural network design, while operators will learn how to provide resources such as GPUs and virtual GPUs. Details here.
HPC using OpenStack
High-performance computing (HPC) designs supercomputers around parallel processing to run advanced application programs efficiently, reliably and quickly. OpenStack is a set of software tools for building and managing cloud computing platforms for public and private clouds.
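The parallel-processing idea at the heart of HPC can be illustrated without a cluster: split a job into chunks and farm them out to worker processes. A minimal sketch using Python's standard multiprocessing module (the chunked-sum workload is illustrative only; real HPC jobs distribute far larger computations across many nodes):

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [start, stop) -- one worker's share of the job."""
    start, stop = bounds
    return sum(range(start, stop))

def parallel_sum(n, workers=4):
    """Split [0, n) into equal chunks and sum the chunks in parallel."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    total = parallel_sum(1_000_000)
    print(total)  # same answer as the serial sum, computed across processes
```

On a real HPC deployment the "workers" become OpenStack instances sized and scheduled for the workload, which is exactly the pairing the panel below discusses.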
If you’re looking for common deployment models for HPC & OpenStack, come and interact with the community in this recurring panel. This session will be an opportunity for architects and operators pairing HPC with OpenStack to get together and discuss best practices and common deployment models, pain points, war stories and wish lists. Check out the Etherpad for pre-panel questions here. Details on the session here.
Accessible ML: Combining open source and open data
If you think that only big tech companies or PhD scientists can use ML and AI, this session aims to show you that an individual open-source enthusiast can build and train a model on commodity hardware using open data – and then scale it up on a public cloud. If you’re a gamer and a Python developer, you might already have all the tools you need:
* fast.ai, an easy-to-learn Python ML framework
* nvidia-docker on an Ubuntu Gaming PC
* Public-domain GIS imagery
* A couple terabytes of storage space and a fast internet connection
This talk grew out of the Firewise project, which Aeva van der Veen helped bootstrap last year. The project aimed to use public-domain satellite imagery to help predict and prevent forest fires. Even though the founders chose not to pursue this as a business, it’s an excellent example of how easily open source and public data can be combined to benefit society. Details here.
See you at the Open Infrastructure Summit in Denver, April 29-May 1! Register here.