Decentralized Machine Learning

Our Decentralized Machine Learning Solution

Systems Element

Individualized machine learning
for every customer

Decentral Element

Models learn from each other
without exchange of raw data

Integration Element

Easy integration
into existing infrastructure

Model exchange instead of data exchange

mlx enables on-the-edge machine learning across cross-company systems through a paradigm shift in data science

Data stays with its owner

mlx lets you easily coordinate training, deployment, and management of ML models directly on the customer's shop floor. There is no need to transfer sensitive customer data, and industrial PCs in the local environments can be upgraded with minimal effort.

Adapt to diverse customer requirements

Industrial customers often differ in their local requirements: machines are used in different settings or processes. With mlx, models can be tailored to these specifics.

Asynchronous communication

With mlx, your ML-based feature or service is unaffected by machines being temporarily offline. ML models are deployed on the edge and run autonomously. Machines use outbound-only communication and reconnect to the cloud once they are back online.
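
As a generic illustration of this store-and-forward pattern (not mlx's actual protocol), an edge node could queue its results locally and push them outbound once connectivity returns; the endpoint and retry interval below are assumptions.

```python
# Generic store-and-forward sketch: queue results on the edge and push
# them outbound once the cloud is reachable again. Purely illustrative;
# the endpoint and retry interval are placeholders, not part of mlx.
import time
import requests

CLOUD_URL = "https://central.example.com/results"  # placeholder endpoint

def push_when_online(pending_results, retry_seconds=60):
    while pending_results:
        try:
            requests.post(CLOUD_URL, json=pending_results[0], timeout=5).raise_for_status()
            pending_results.pop(0)        # delivered, drop it from the queue
        except requests.RequestException:
            time.sleep(retry_seconds)     # still offline, try again later
```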

Develop the models in the local environment – without the need to centralize data.

mlx features

Leverage powerful ML models built on Federated Learning for best-in-class results. mlx automatically evolves ML models across decentrally stored data (on-premise, private cloud, etc.).
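
To illustrate the core idea behind evolving models over decentralized data (a sketch of federated averaging, not mlx's actual implementation), local weight updates can be combined without the raw data ever leaving the nodes; the arrays and sample counts below are made up.

```python
# Minimal federated-averaging sketch: combine locally trained weights,
# weighted by how much data each node holds. Illustrative only.
import numpy as np

def federated_average(local_weights, num_samples):
    """Average per-layer weight arrays across nodes, weighted by dataset size."""
    total = sum(num_samples)
    averaged = []
    for layer in range(len(local_weights[0])):
        averaged.append(sum(w[layer] * (n / total)
                            for w, n in zip(local_weights, num_samples)))
    return averaged

# Two leaf nodes report weights trained on their own data only.
node_a = [np.array([0.2, 0.4]), np.array([0.1])]
node_b = [np.array([0.6, 0.0]), np.array([0.3])]
global_model = federated_average([node_a, node_b], num_samples=[800, 200])
print(global_model)  # weighted toward node_a, which holds more data
```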

Compare the evaluation results of your deployed ML models and centrally monitor the performance metrics in real time.

Status, validity and exchange of the local ML models can be managed across all instances.

Easily set up ML models in the local environment via Docker and remotely deploy the leaf node in minutes. Afterwards, observe the leaf's hardware and performance in the central mlx dashboard.
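
As a rough sketch of what such a container-based setup could look like from Python (the image name, port mapping, and environment variables are placeholders, not mlx's actual interface):

```python
# Hypothetical sketch: start a leaf-node container via the Docker SDK.
# The image name, ports, and variables below are placeholders.
import docker

client = docker.from_env()
container = client.containers.run(
    "example/mlx-leaf:latest",                                   # placeholder image
    detach=True,
    environment={"CENTRAL_URL": "https://central.example.com"},  # placeholder
    ports={"8080/tcp": 8080},                                    # local module API
    restart_policy={"Name": "unless-stopped"},
)
print(container.short_id, container.status)
```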

Organize and manage your customized ML jobs: Integrate tailored preprocessing steps, create or utilize existing ML models and put the pipeline into action within a few clicks.
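
As a generic illustration of bundling preprocessing and a model into one job (shown here with scikit-learn, which is not necessarily what mlx uses internally):

```python
# Generic preprocessing + model pipeline, illustrating how an ML job can
# chain tailored preprocessing with a new or existing model.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

pipeline = Pipeline([
    ("scale", StandardScaler()),      # tailored preprocessing step
    ("model", LogisticRegression()),  # new or existing ML model
])

# Toy local data, for illustration only.
X = np.random.rand(200, 3)
y = (X[:, 0] > 0.5).astype(int)
pipeline.fit(X, y)                    # the whole job runs in one step
print(pipeline.score(X, y))
```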

Keep track of changes in your local data and automatically retrain models in the local environments when the changes are significant. Handle and manage data schemas directly from the dashboard.
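
One generic way to decide whether a change in the local data is significant (not necessarily the test mlx applies) is a two-sample statistical test against the data the current model was trained on; the significance threshold below is an assumption.

```python
# Illustrative drift check: compare recent local data against the data the
# model was trained on and flag a retrain if the shift is significant.
import numpy as np
from scipy.stats import ks_2samp

def needs_retraining(training_sample, recent_sample, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test on one feature (alpha is assumed)."""
    _, p_value = ks_2samp(training_sample, recent_sample)
    return p_value < alpha

rng = np.random.default_rng(0)
old = rng.normal(loc=0.0, scale=1.0, size=1000)   # feature at training time
new = rng.normal(loc=0.8, scale=1.0, size=1000)   # shifted recent local data
if needs_retraining(old, new):
    print("significant change detected - trigger local retraining")
```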

Seamlessly integrate FedML into your existing ecosystems and make use of Keras libraries by embedding them in mlx with only a few code snippets.
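
The Keras side of such an embedding is just a standard model definition; how the compiled model is then handed over to mlx is product-specific and only indicated as a comment below.

```python
# Standard Keras model definition; the hand-over to mlx is left as a
# placeholder comment because that step is specific to the product.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(16,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# ... here the compiled model would be embedded into mlx with a few code
# snippets (the registration call itself is omitted).
```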

Flexible integration with existing systems

Dockerized modules with REST APIs allow for easy integration in existing ecosystems.
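
For example, an existing system could talk to such a module over plain HTTP; the address, route, and payload below are placeholders rather than a documented mlx API.

```python
# Hypothetical call from an existing system to a dockerized module's REST
# API. URL, route, and payload fields are placeholders.
import requests

MODULE_URL = "http://localhost:8080"   # placeholder address of a local module

response = requests.post(
    f"{MODULE_URL}/predict",                       # placeholder route
    json={"sensor_values": [0.42, 1.7, 3.1]},      # placeholder payload
    timeout=5,
)
response.raise_for_status()
print(response.json())
```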

mlx strives to integrate all of the common frameworks to work locally and in a decentralized manner.

“Organizations that want to share data, but are concerned about privacy, should explore a federated learning approach. This allows data to be shared yet not revealed across organizations. […] There is a small yet growing list of vendors using various approaches in that space, including […] prenode.” (Gartner, 2019)

Request a demo