Apple has introduced MLX, an open-source machine learning (ML) framework tailored for Apple Silicon computers powered by its M1, M2, and M3 series chips. Available on GitHub, the framework simplifies training and running ML models directly on Apple Silicon devices, removing the need for the translation tools previously required with Core ML. MLX uses a unified memory model, and its design draws on familiar frameworks such as ArrayFire, Jax, NumPy, and PyTorch. It offers a C++ API and a Python API modeled on NumPy, along with higher-level packages for building and running complex models.
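For a sense of what that NumPy-style API looks like in practice, here is a minimal sketch (assuming MLX is installed via pip on an Apple Silicon Mac, which is not spelled out in the article). It illustrates the unified memory model, where arrays need no explicit device transfers, and MLX's lazy evaluation, where work only happens when results are requested:

```python
# Minimal sketch of MLX's NumPy-like Python API (assumes `pip install mlx`
# on an Apple Silicon Mac). Arrays live in unified memory, so no explicit
# CPU/GPU transfers are needed, and computation is lazy until evaluated.
import mlx.core as mx

a = mx.random.normal((1024, 1024))   # random matrix in unified memory
b = mx.random.normal((1024, 1024))

c = a @ b + 1.0                      # builds a lazy computation graph
mx.eval(c)                           # forces evaluation of the graph

print(c.shape, mx.mean(c).item())    # shapes and reductions work much like NumPy
```

Because the arrays are never pinned to a specific device, the same code can run on the CPU or GPU without the `.to(device)` bookkeeping familiar from PyTorch.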
Apple showcased MLX in action, demonstrating its speed advantage over PyTorch in tasks such as image generation with Stable Diffusion on Apple Silicon hardware. In tests on an M2 Ultra chip, MLX generated 16 images in 90 seconds, while PyTorch took around 120 seconds for the same task. The framework supports a range of applications, including text generation with Meta's LLaMA model and the Mistral large language model, and ML and AI researchers can run OpenAI's Whisper speech recognition models locally through MLX. The release is expected to boost ML research and development on Apple hardware and pave the way for more efficient on-device ML features on users' computers.
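As a rough illustration of that text-generation workflow, the sketch below uses the companion mlx-lm package to run a quantized Mistral model locally. The package name, the model repository, and the exact call signature are assumptions beyond the article, which only notes that LLaMA and Mistral examples are available for MLX:

```python
# Hypothetical sketch of on-device text generation with the mlx-lm companion
# package (assumes `pip install mlx-lm`; the model repo below is illustrative).
from mlx_lm import load, generate

# Download and load a 4-bit quantized Mistral model plus its tokenizer.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")

# Generate a short completion entirely on the Mac's unified memory.
text = generate(model, tokenizer,
                prompt="Explain unified memory in one sentence.",
                max_tokens=128)
print(text)
```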
Read the full article here: https://www.gadgets360.com/laptops/news/apple-silicon-mlx-framework-open-source-efficient-machine-learning-4645755