This week at PyTorch Conference EU in Paris, the PyTorch Foundation announced a trio of new projects joining its portfolio: Safetensors, ExecuTorch, and Helion.
Under the Linux Foundation, the PyTorch Foundation is a community-driven hub that supports the open source PyTorch framework along with a broader portfolio of open source AI projects, including DeepSpeed, Ray, and vLLM.
Together, these projects provide vendor-neutral infrastructure for the entire AI lifecycle, from training through inference. In bringing Safetensors, ExecuTorch, and Helion into the fold, the foundation strengthens its position as the vendor-neutral hub for open source AI.
Safetensors brings secure model distribution
On Tuesday, Safetensors took center stage in PyTorch Foundation news as the newest foundation-hosted project.
Hugging Face, the open source AI platform, developed Safetensors in 2022 and has maintained it since, as it grew into one of the most widely used tensor serialization formats in the open source machine learning ecosystem. Now part of the PyTorch fold, Safetensors will help enable secure model distribution, minimizing the security risks associated with loading and executing shared models.
With developers working on new AI models at breakneck speeds, security risks are also rapidly proliferating, making Safetensors a timely addition to the PyTorch Foundation’s portfolio — and a win for the industry at large.
Unlike formats such as pickle, which can execute arbitrary (and potentially malicious) code embedded in model files, Safetensors stores only tensor data behind a header that acts as a sort of “table of contents” for the file, preventing arbitrary code execution and making model sharing safer.
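The published safetensors format is straightforward: an 8-byte little-endian header size, a JSON header describing each tensor’s dtype, shape, and byte offsets (the “table of contents”), and then the raw tensor bytes. A minimal, stdlib-only sketch of that layout, for illustration only (the official safetensors library adds validation, alignment, and framework integrations this sketch omits), might look like:

```python
import json
import struct

def write_safetensors(path, tensors):
    """Write a minimal safetensors-style file.

    `tensors` maps a tensor name to (dtype, shape, raw_bytes),
    e.g. ("F32", [2, 2], <16 raw bytes>).
    """
    header = {}
    offset = 0
    blobs = []
    for name, (dtype, shape, data) in tensors.items():
        header[name] = {
            "dtype": dtype,
            "shape": shape,
            # Offsets are relative to the start of the data section.
            "data_offsets": [offset, offset + len(data)],
        }
        offset += len(data)
        blobs.append(data)
    header_bytes = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(header_bytes)))  # 8-byte LE header size
        f.write(header_bytes)                          # JSON "table of contents"
        for blob in blobs:
            f.write(blob)                              # raw bytes, nothing executable

def read_safetensors_header(path):
    """Read only the header: enough to list tensors without touching the data."""
    with open(path, "rb") as f:
        (n,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(n).decode("utf-8"))
```

Because loading is just parsing JSON and slicing byte ranges, there is no code path for a model file to run attacker-controlled code, which is the core of the security argument.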
In the foundation’s announcement, executive director Mark Collier called Safetensors’ contribution “an important step towards scaling production-grade AI models.”
From ExecuTorch, greater on-demand inference capabilities
Also on Tuesday, the PyTorch Foundation welcomed ExecuTorch as a PyTorch Core project.
First introduced publicly at a PyTorch Conference in 2023, ExecuTorch began under Meta with the aim of making it simpler to run PyTorch models in edge and on-device environments (e.g., mobile phones and AR/VR headsets).
Specifically, as stated in the PyTorch Foundation’s announcement, ExecuTorch was designed with four core principles in mind: 1) end-to-end developer experience; 2) portability across hardware; 3) small, modular, and efficient; 4) open by default.
Over the past couple of years, the runtime has evolved from an internal tool into an open platform for on-device AI. It not only supports model deployment for Meta products but has found a larger audience, helping developers productionize PyTorch-based models on edge devices, including for AR/VR experiences, computer vision and sensor processing at the edge, and generative AI and LLM-based assistants on devices.
Now a PyTorch core project under the PyTorch Foundation, ExecuTorch will extend PyTorch functionality for efficient AI inference on edge devices. Joining the foundation also gives the project vendor-neutral governance, an open source structure, and clear IP, trademark, and branding (Meta will remain a major contributor but will not have unilateral control over the project).
Helion standardizes AI kernel development
Helion also joined the PyTorch fold on Tuesday, adding to the foundation’s list of open source AI projects.
A Python-embedded domain-specific language (DSL) for authoring machine learning kernels, Helion comes to the PyTorch Foundation with the goal of simplifying kernel development across the open AI ecosystem.
Specifically, as outlined in the PyTorch Foundation’s announcement, it aims to “raise the level of abstraction compared to kernel languages, making it easier to write efficient kernels while enabling more automation in the autotuning process.”
Like Safetensors’ and ExecuTorch’s arrival, Helion’s entry into the PyTorch Foundation’s portfolio comes at a good time. The AI era is starting to shift from primarily training models to running at-scale inference — and with this shift comes demands for higher-level performance portability across diverse hardware.
By arming developers with higher-level abstraction, along with ahead-of-time autotuning, Helion should make it easier to write high-performance, hardware-portable machine learning kernels.
Expanding the open source AI stack
As the AI industry begins to shift from training models to deploying and scaling them in production, new questions arise about security, performance, and portability. By bringing Safetensors, ExecuTorch, and Helion under its umbrella, the PyTorch Foundation not only grows its portfolio of projects but strengthens the entire open source AI stack.
Meredith Shubel is a technical writer covering cloud infrastructure and enterprise software. She has contributed to The New Stack since 2022, profiling startups and exploring how organizations adopt emerging technologies. Beyond The New Stack, she ghostwrites white papers, executive bylines,…
