Apple Unveils Open-Source Path for AI Development on Its Silicon Chips

Apple switched to its own silicon computer chips three years ago, moving boldly toward total control of its technology stack. Today, Apple launched MLX, an open-source framework tailored specifically to machine learning on Apple's M-series chips.

Most AI software development currently takes place on Linux or Microsoft Windows systems, and Apple does not want its thriving developer ecosystem to be left out of the latest big thing.

MLX aims to resolve the longstanding compatibility and performance issues associated with Apple's unique architecture and software, but it's more than a simple technical play. MLX offers a user-friendly design that draws inspiration from well-known frameworks like PyTorch, Jax, and ArrayFire, and its introduction promises a more streamlined process for training and deploying AI models on Apple devices.
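For a sense of that familiarity, here is a minimal sketch of the kind of NumPy-style code MLX's Python API (the mlx.core module) is designed for; the values are purely illustrative:

```python
import mlx.core as mx

# Arrays are created much like NumPy or PyTorch tensors.
a = mx.array([1.0, 2.0, 3.0])
b = mx.ones((3,))

# Operations build a lazy computation graph...
c = a + b * 2.0

# ...which is only materialized when evaluation is forced.
mx.eval(c)
print(c)  # array([3, 4, 5], dtype=float32)
```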

Architecturally, MLX is set apart by its unified memory model, where arrays exist in shared memory, enabling operations across supported device types without requiring data duplication. This feature is crucial for developers seeking flexibility in their AI projects.

In short, unified memory means the CPU and GPU share a single pool of RAM, so instead of buying a powerful PC and then adding a beefy GPU with a lot of VRAM, you can simply use your Mac's RAM for everything.
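In MLX terms, an array is therefore not pinned to a single device: the framework's documentation describes passing a stream (such as the CPU or GPU) to an operation while the arrays themselves stay in shared memory. A rough sketch of that pattern, with illustrative sizes:

```python
import mlx.core as mx

# Both arrays live in unified memory; neither is copied "to" a device.
a = mx.random.normal((4096, 4096))
b = mx.random.normal((4096, 4096))

# The same arrays can feed operations on either device,
# with no explicit transfer step in between.
c_cpu = mx.add(a, b, stream=mx.cpu)
c_gpu = mx.add(a, b, stream=mx.gpu)

mx.eval(c_cpu, c_gpu)
```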

However, the road to AI development on Apple Silicon has not been without challenges, mainly due to Apple's closed ecosystem and limited compatibility with widely used open-source projects and infrastructure.

“It’s exciting to see more tools like this for working with tensor-like objects, but I really wish Apple would make porting custom models in a high-performance manner easier,” a developer said on Hacker News in a discussion of the announcement.

Up until now, developers had to convert their models to Core ML to run them on Apple devices. This reliance on a translation layer is not ideal. Core ML is focused on converting pre-existing machine learning models and optimizing them for Apple devices; MLX, by contrast, is about creating and executing machine learning models directly and efficiently on Apple's own hardware, offering tools for innovation and development within the Apple ecosystem.
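As a rough illustration of that "build and run directly" workflow, here is a hedged sketch of a tiny model defined with MLX's neural-network module; the layer sizes and data are invented, and the mlx.nn calls are paraphrased from Apple's published examples:

```python
import mlx.core as mx
import mlx.nn as nn

class TinyClassifier(nn.Module):
    """A two-layer MLP defined directly against MLX, with no conversion step."""
    def __init__(self, in_dims: int, hidden: int, num_classes: int):
        super().__init__()
        self.fc1 = nn.Linear(in_dims, hidden)
        self.fc2 = nn.Linear(hidden, num_classes)

    def __call__(self, x):
        return self.fc2(nn.relu(self.fc1(x)))

model = TinyClassifier(in_dims=32, hidden=64, num_classes=10)
mx.eval(model.parameters())      # materialize the lazily initialized weights

x = mx.random.normal((8, 32))    # a fake batch of 8 examples
logits = model(x)
mx.eval(logits)
print(logits.shape)              # (8, 10)
```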

MLX has produced good results in early benchmark tests. Its compatibility with tools like Stable Diffusion and OpenAI's Whisper represents a significant step forward. Notably, performance comparisons highlight MLX's efficiency: it outperforms PyTorch in image-generation speed at higher batch sizes.

For instance, Apple reports it takes “about 90 seconds to fully generate 16 images with MLX and 50 diffusion steps with classifier free guidance and about 120 for PyTorch.”

As AI continues to evolve at a rapid pace, MLX represents a critical milestone for Apple’s ecosystem. It not only addresses technical challenges but also opens up new possibilities for AI and machine learning research and development on Apple devices—a strategic move, considering Apple’s divorce from Nvidia and its own robust AI ecosystem.

MLX aims to make Apple's platform a more attractive and feasible option for AI researchers and developers, and it means a merrier Christmas for AI-obsessed Apple fans.

This article originally appeared on Decrypt.