We may be stuck at home but that didn’t stop over 2000 enthusiastic AI, Machine Learning & Edge Computing pioneers from joining us for the AI Accelerate Festival APAC, Dec 1-3, 2020.

We had some awesome presentations from different international companies but the fun didn’t stop there as our Slack channel was alive with Q&As.

So, to keep the good times rollin’, we’ve pulled out some of the top questions and answers to share with you. Check ‘em out below… and if you’d like to continue the conversation, be sure to check out the #ai-events channel on the AI Accelerator Institute Slack.

Muni Vinay Kamisetty, SVP, Regional Head of Data and BI, Enterprise Intelligence of Lazada, kicked off the festival with the presentation “MLOps - Hardware considerations and implementation on edge devices”.

Q: At the end you talked about a use case with drone delivery where new models are constantly being trained and deployed to the drone fleet in response to new data. How do you test a new model before you deploy it to ensure it is safe? Do you deploy new models only to a fraction of your edge devices first?

Yes. As part of the MLOps training in the command center, AutoML can be leveraged to automate the various training runs, which go through a workflow; based on the validation results (F1 scores and recall), the model is pushed to the edge. Human augmentation also helps (a notification center is useful here): operators might give manual commands (for example, during a fire) and bring the drone back. We are also exploring simulators (an Alibaba custom build, similar to Zephyr) to try the model in simulation before pushing it to an edge device.
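The validation-gate-plus-canary pattern described in this answer can be sketched in a few lines. This is a minimal illustration, not Lazada's actual pipeline; the function names, metric thresholds, and canary fraction are all hypothetical stand-ins.

```python
# Hypothetical sketch of the workflow described above: a newly trained
# model is pushed to the edge fleet only if its offline validation
# metrics clear minimum thresholds, and even then only to a small
# canary slice of devices first. Names and thresholds are illustrative.

def passes_validation(metrics, min_f1=0.90, min_recall=0.85):
    """Gate a candidate model on its offline validation results."""
    return metrics["f1"] >= min_f1 and metrics["recall"] >= min_recall

def select_canary(device_ids, fraction=0.05):
    """Deterministically pick a small fraction of the fleet for rollout."""
    n = max(1, int(len(device_ids) * fraction))
    return sorted(device_ids)[:n]

metrics = {"f1": 0.93, "recall": 0.88}          # from the AutoML workflow
fleet = [f"drone-{i:03d}" for i in range(100)]  # the edge device fleet

if passes_validation(metrics):
    canary = select_canary(fleet)
    print(f"Deploying to {len(canary)} canary devices first")
else:
    print("Model held back; notify operators for manual review")
```

If the canary devices behave well, the same push step would be repeated for the rest of the fleet; otherwise the notification center flags operators for manual intervention.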

Q: How is security handled on edge devices running AI/ML computing?

There are two things to do here: 1) Involve your cybersecurity team in the MLOps or MLEdgeOps process from the beginning. 2) On the edge devices specifically, we follow the SASE (Secure Access Service Edge) model, which automates and integrates network security functions: secure IoT gateways, CASB, edge firewalls, zero-trust network access, and so on, in combination. Thanks to Alicloud, we leverage their Edge Node Service (ENS), where capabilities like DDoS detection happen in real time.

Bhanu Prakash Padiri, Head of Advanced Engineering, ADAS R&D, Continental, discussed how autonomous driving is achieved and how human-centered AI is helping to improve its performance.

Q: You mentioned that we will need to train self-driving systems to make ethical choices. Do you see a good way to achieve this? Most of the ethical dilemmas that are brought up seem like extreme edge cases, how can we account for scenarios that we can't foresee?

Currently, it's a challenge to program ethics into AI. This is where crowd intelligence can help us make better decisions and converge on the right ones. The challenge is how to do this in real time; it's an active area of research, but gradually we will get there and handle such cases.

Sabrina Ren, Director, Tech Scouting & Open Innovation, OPPO, later took the stage to share her knowledge and experience in “Taking AI to the edge: Challenges & insight from device perspective”.

Q: Do you feel any chipset is better than another on OPPO smartphones for most of the applications you discussed? How would you compare the chipsets from Qualcomm, MediaTek, and others?

The new direction of our research efforts is to make on-device AI hardware-agnostic, able to run on multiple platforms, and to bring unique experiences such as context awareness to mass-market consumers. AI needs to run well even on smaller form factors such as watches or earphones, though certain AI apps will perform better on high-end platforms.
The cases presented are running on Qualcomm (QCOM) silicon; however, we do have plans to migrate the models to other SoC platforms. At the end of the day, the challenge we pose to the on-device AI research team is to optimize the model and its performance for any platform. For example, our wearable products run on-device AI with even smaller compute, power, and memory budgets than phones.

Ashutosh Agrawal, Director of AI Product, Bosch, enlightened us on a “data loop” strategy: a generalized overview of how AI-based tools and processes enable the data loop for large-scale product development, with examples from the autonomous driving domain.

Q: In your presentation, “operation vacation” from Tesla is an interesting point in autonomous driving and AI; could you kindly elaborate on it?

Sure. The main idea behind the operation vacation concept is to save the waiting time of expensive deep learning engineers. The waiting could be caused by the data curation team, the validation team, and other teams in general. The idea is to automate the process as much as possible so that everything runs on its own for several iterations, improving model accuracy along the way. For such a thing to work, you need a framework or platform approach with tons of plug-ins and integrated pipelines.
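The automated loop described here can be sketched at a very high level. This is a toy illustration of the concept only, not Tesla's or Bosch's actual system; every function below is a hypothetical stand-in for a real pipeline plug-in, and the numbers are made up.

```python
# Illustrative sketch of the "operation vacation" idea: chain the data
# curation, training, and validation stages into one automated loop so
# engineers don't sit waiting between stages. All stage functions are
# hypothetical stand-ins with toy formulas.

def curate_data(iteration):
    # Stand-in: pretend each round surfaces more curated examples.
    return {"samples": 1000 + 200 * iteration}

def train(dataset):
    # Stand-in: accuracy improves with more curated data (toy formula).
    return {"accuracy": min(0.99, 0.80 + dataset["samples"] / 20000)}

def validate(model, target=0.90):
    # Gate: only a model that meets the accuracy target "ships".
    return model["accuracy"] >= target

def operation_vacation(max_iterations=10):
    """Run curate -> train -> validate in a loop until the model passes."""
    for i in range(max_iterations):
        model = train(curate_data(i))
        if validate(model):
            return i, model
    return max_iterations, model

rounds, model = operation_vacation()
print(f"Passed validation on iteration {rounds} "
      f"at accuracy {model['accuracy']:.2f}")
```

The point of the pattern is that the loop body runs unattended for many iterations, with human review only on the exceptions, which is what frees the engineers to go "on vacation".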

One of our closing sessions came from Jianming Chen, NCS Solution Architect, AI, Nokia, who shared the evolution of AI infrastructure toward quantum computing.

Q: How do you see the potential of Quantum Computing to advance the EdgeAI-IoT Applications?

I think quantum computing will be suitable for extremely fast and sophisticated decision-making. It can be combined with edge IoT applications via service APIs to solve problems such as decision-making for autonomous vehicles or optimization for therapy.
In medical therapy, quantum computing can be used to design custom pharmaceuticals that minimize side effects. In autonomous driving, the decision to control the vehicle must be made in milliseconds; quantum computing is a potential technology for solving these kinds of problems.

Q: As AI needs more processing power, how can quantum technology contribute to the area, power consumption, and performance of VLSI chips?

Quantum technology promises significant advantages in power consumption and performance compared to VLSI chips. Manipulating qubits costs less power; however, today's quantum computers spend more power stabilizing the qubits.
On performance, QC outperforms VLSI on some problems, especially optimization problems with a large number of variables: QC can solve these nearly instantly, while classical machines cannot. On the area issue, QC doesn't have an advantage over VLSI, and QPU fabrication is still in its infancy.

If you made it this far, thanks for reading! And as mentioned, be sure to check out the AI Accelerator Institute Slack to engage with like-minded AI and machine learning people and keep these conversations going. It is our mission to help develop the next generation of machine intelligence, with a specific focus on accelerating AI at the edge.