
Together AI Enhances Fine-Tuning Platform with Larger Models and Hugging Face Integration



Lawrence Jengar
Sep 10, 2025 19:13

Together AI unveils major upgrades to its Fine-Tuning Platform, including support for 100B+ parameter models, extended context lengths, and improved integration with Hugging Face Hub.





Together AI has announced significant upgrades to its Fine-Tuning Platform, aiming to streamline the model customization process for AI developers. The latest enhancements include the ability to train models with over 100 billion parameters, extended context lengths, and enhanced integration with the Hugging Face Hub, according to Together AI.

Expanding Model Capacity

The platform now supports a range of new large models, such as DeepSeek-R1, Qwen3-235B, and Llama 4 Maverick. These models are designed to perform complex tasks, sometimes rivaling proprietary models. The platform’s engineering optimizations allow for efficient training of these large-scale models, reducing both costs and time investments.

Longer Context Lengths

Responding to the growing need for long-context processing, Together AI has overhauled its training systems to support increased context lengths. Developers can now utilize context lengths of up to 131k tokens for certain models, enhancing the platform’s capability to handle complex and lengthy data inputs.

Integration with Hugging Face Hub

The integration with the Hugging Face Hub allows developers to fine-tune a wide array of models hosted on the Hub. This feature enables users to start from a pre-adapted model and further customize it for specific tasks. Additionally, outputs from training runs can be saved directly into a repository on the Hub, facilitating seamless model management.
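As a rough illustration of that workflow, a fine-tuning job could start from a Hub-hosted checkpoint and name a destination Hub repository for the trained weights. The field names below (`model`, `training_file`, `max_context_length`, `hf_output_repo`) are illustrative assumptions for this sketch, not Together AI's documented request schema:

```python
import json

# Hypothetical fine-tuning job payload: start from a model hosted on the
# Hugging Face Hub and push the finished weights back to a Hub repo.
# All field names here are illustrative, not a documented API schema.
job = {
    "model": "Qwen/Qwen3-235B-A22B",         # Hub-hosted base model
    "training_file": "file-abc123",          # previously uploaded dataset ID
    "n_epochs": 3,
    "max_context_length": 131_072,           # long-context training (131k tokens)
    "hf_output_repo": "my-org/qwen3-custom", # destination repository on the Hub
}
print(json.dumps(job, indent=2))
```

The point of the sketch is the shape of the loop: pull a base model from the Hub, train against an uploaded dataset, and write the result back to a Hub repo rather than downloading weights manually.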

Advanced Training Objectives

Together AI has also expanded its support for preference optimization with new training objectives, such as length-normalized DPO and SimPO, offering more flexibility in training on preference data. The platform now also accepts a maximum batch-size setting, selecting the largest batch size supported for a given model and training mode to optimize throughput.
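To make the difference between these objectives concrete, the sketch below computes the standard DPO, length-normalized DPO, and SimPO losses for a single (chosen, rejected) preference pair from scalar log-probabilities. It follows the published formulations of these losses in simplified scalar form; the function name and default hyperparameters are illustrative, not the platform's implementation:

```python
import math

def dpo_losses(pi_chosen, pi_rejected, ref_chosen, ref_rejected,
               len_chosen, len_rejected, beta=0.1, gamma=0.5):
    """Compare three preference objectives on one (chosen, rejected) pair.

    pi_* / ref_*: summed log-probabilities of each response under the
    policy and reference models; len_*: response lengths in tokens.
    """
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))

    # Standard DPO: margin between policy-vs-reference log-ratios.
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    dpo = -math.log(sigmoid(margin))

    # Length-normalized DPO: divide each log-ratio by response length,
    # so verbose answers are not preferred merely for being long.
    ln_margin = beta * ((pi_chosen - ref_chosen) / len_chosen
                        - (pi_rejected - ref_rejected) / len_rejected)
    ln_dpo = -math.log(sigmoid(ln_margin))

    # SimPO: reference-free; uses length-averaged policy log-probs
    # with a target reward margin gamma.
    simpo_margin = (beta * (pi_chosen / len_chosen - pi_rejected / len_rejected)
                    - gamma)
    simpo = -math.log(sigmoid(simpo_margin))
    return dpo, ln_dpo, simpo
```

In practice these losses are computed over batches of token-level log-probabilities inside the training loop; the scalar version above only shows how length normalization and the reference-free SimPO margin change the objective.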

These enhancements are part of Together AI’s commitment to providing cutting-edge tools for AI researchers and engineers. With these new features, the Fine-Tuning Platform is positioned to support even the most demanding AI development tasks, making it a cornerstone for innovation in machine learning.

Image source: Shutterstock


Source: https://blockchain.news/news/together-ai-enhances-fine-tuning-platform-larger-models
