BIGGER DATA, SMALLER MACHINE

Supercharge Your AI Training. At Home, In Office, & The Lab.

With GPU Memory Extension by Phison aiDAPTIV+, You Unlock Advanced AI Model Performance.


Designed for Real-World AI Creators!

AI Training PC

Optimized for training LLMs up to 13 billion parameters, with faster inference.

Tech Specs:

  • NVIDIA RTX 5070 Ti
  • Intel Core Ultra 7
  • Phison aiDAPTIV+ AI Software
  • Phison 320GB aiDAPTIVCache
  • 64GB RAM
  • 2TB SSD Storage
  • Ubuntu Linux OS
Buy Now

aiONE Workstation

The ultimate desktop machine for local AI fine-tuning of models up to 100 billion parameters, with accelerated inference.

Tech Specs:

  • 4x NVIDIA RTX 6000 Ada
  • Intel Xeon W5
  • Phison aiDAPTIV+ AI Software
  • Phison 2TB aiDAPTIVCache
  • 512GB RAM
  • 2TB SSD primary storage
  • 4x 2TB SSD additional storage
  • Ubuntu Linux OS
Buy Now

What is Phison aiDAPTIV+?

Trains 10x larger models than GPU-only systems, supports RAG and fine-tuning, and is compatible with PyTorch and NVIDIA NeMo.

Extends GPU Memory

Enables larger LLMs on local hardware

Affordable Flash Memory as Cache

Fits your budget

Private. On-Premises AI.

You keep control over your data



Watch & Learn


Already Have an NVIDIA GPU-based Desktop?

Upgrade for More AI Processing Power.

AI100E Cache SSDs

AI100E aiDAPTIVCache SSDs Contain:

  • aiDAPTIVCache Memory
  • aiDAPTIVLink Memory Management Middleware
  • aiDAPTIVPro Suite all-in-one AI toolkit

320GB M.2

1TB M.2

2TB M.2

1TB U.2

2TB U.2

2024 FMS Best of Show
2025 Taiwan Excellence
2024 IT Matters Awards
Fits Your Budget


Simple to Use and Deploy


Keeps Data in Your Control


Affordable

Offloads data from expensive HBM and GDDR memory to cost-effective flash memory, eliminating the need for large numbers of high-cost, power-hungry GPU cards.

Ease of Use


Deploys easily in your home, office, classroom, or data center with a small footprint, using standard power and cooling.

Offers command line access or an intuitive GUI with an all-in-one toolset for model ingest, fine-tuning, validation, and inference.

Control

Enables LLM training behind your firewall, giving you full control over your private data and peace of mind with data-sovereignty compliance.

Teach Yourself LLM Training with an AI Training PC

Provides a cost-effective AI Training PC for individuals and organizations to learn how to fine-tune LLMs, going beyond simple inference. Helps close the shortage of talent skilled at training LLMs locally on your own data.

Download Solution Brief

LLM Training Use Cases

LLM training on-premises enables organizations and individuals to enhance general-knowledge models with domain-specific data. This provides better usability, relevance, and accuracy across a wide range of specialized fields such as medical diagnostics, financial forecasting, legal analysis, and product development.


Phison aiDAPTIV+ LLM Training Integrated Solution


Use the command line or leverage the intuitive all-in-one aiDAPTIVPro Suite to perform LLM training.


Supported Models

  • Llama, Llama-2, Llama-3, CodeLlama
  • Vicuna, Falcon, Whisper, Clip Large
  • MetaFormer, ResNet, DeiT-Base, Mistral, TAIDE
  • And many more being continually added
Built-in Memory Management Solution


Experience seamless PyTorch compatibility that eliminates the need to modify your AI application. You can effortlessly add nodes as needed. System vendors get access to the AI100E SSD, middleware library licenses, and full Phison support for smooth system integration.

aiDAPTIV+ BENEFITS

  • Transparent drop-in
  • No need to change your AI Application
  • Reuse existing HW or add nodes

aiDAPTIV+ MIDDLEWARE

  • Slice model, assign to each GPU
  • Hold pending slices on aiDAPTIVCache
  • Swap pending slices w/ finished slices on GPU
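The middleware steps above can be sketched in miniature. This is purely illustrative pseudologic, not the actual aiDAPTIVLink API: the model is cut into slices, the GPU holds only as many slices as fit in its memory, and each finished slice is swapped for a pending slice waiting on aiDAPTIVCache.

```python
from collections import deque

def train_with_slice_swap(num_slices, gpu_slots):
    """Simulate aiDAPTIV+-style slice scheduling (illustrative only).

    num_slices -- how many slices the model is cut into
    gpu_slots  -- how many slices fit in GPU memory at once
    Returns the order in which slices were trained and a log of swaps.
    """
    cache = deque(range(num_slices))  # pending slices held on flash cache
    gpu = [cache.popleft() for _ in range(min(gpu_slots, num_slices))]
    trained, swaps = [], []

    while gpu:
        slice_id = gpu.pop(0)            # train the oldest resident slice
        trained.append(slice_id)
        if cache:                        # swap a pending slice in from cache
            incoming = cache.popleft()
            gpu.append(incoming)
            swaps.append((slice_id, incoming))  # (evicted, loaded)
    return trained, swaps

order, swaps = train_with_slice_swap(num_slices=6, gpu_slots=2)
print(order)  # every slice gets trained exactly once
print(swaps)  # each finished slice is exchanged for a pending one
```

The point of the sketch is that the GPU never needs room for the whole model at once: at any moment only `gpu_slots` slices are resident, while the rest wait on the cache tier.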

FOR SYSTEM INTEGRATORS

  • Access to the AI100E SSD
  • Middleware library license
  • Full Phison support for system bring-up

Seamless Integration with GPU Memory

The optimized middleware extends GPU memory by an additional 320GB (for PCs) up to 8TB (for workstations and servers) using aiDAPTIVCache. This added memory is used to support LLM training with low latency. Furthermore, the high endurance feature offers an industry-leading 100 DWPD, utilizing a specialized SSD design with an advanced NAND correction algorithm.

SEAMLESS INTEGRATION

  • Optimized middleware extends GPU memory capacity
  • 2x 2TB aiDAPTIVCache to support 70B model
  • Low latency
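A rough back-of-the-envelope check on the 70B figure. The accounting below is an assumption for illustration (FP16 weights and gradients plus FP32 Adam optimizer states, activations excluded); Phison's actual memory accounting may differ.

```python
def training_footprint_gb(params_billion):
    """Rough fine-tuning memory estimate (illustrative assumptions only)."""
    p = params_billion * 1e9
    weights   = p * 2   # FP16 weights: 2 bytes per parameter
    grads     = p * 2   # FP16 gradients: 2 bytes per parameter
    optimizer = p * 12  # Adam: FP32 master copy + two moments, 12 bytes/param
    return (weights + grads + optimizer) / 1e9  # decimal GB

total = training_footprint_gb(70)
print(f"~{total:.0f} GB needed for a 70B fine-tune")   # ~1120 GB
print(f"fits in 2x 2TB aiDAPTIVCache: {total < 4000}")  # True
```

Under these assumptions a 70B fine-tune needs on the order of 1.1TB of state, which comfortably fits in two 2TB cache drives but far exceeds the HBM/GDDR of any single GPU card.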

HIGH ENDURANCE

  • Industry-leading 100 DWPD with 5-year warranty
  • SLC NAND with advanced NAND correction algorithm
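To put the 100 DWPD rating in perspective: drive writes per day times capacity gives the daily write volume, and multiplying out over the 5-year warranty gives the total rated endurance. A quick arithmetic sketch (decimal units assumed):

```python
def endurance_pb(capacity_tb, dwpd, warranty_years):
    """Total rated writes implied by a DWPD spec (decimal units)."""
    daily_tb = capacity_tb * dwpd            # TB written per day
    total_tb = daily_tb * 365 * warranty_years
    return daily_tb, total_tb / 1000         # (TB/day, PB over warranty)

daily, total = endurance_pb(capacity_tb=2, dwpd=100, warranty_years=5)
print(f"{daily} TB/day, {total:.0f} PB over warranty")  # 200 TB/day, 365 PB
```

So a 2TB drive rated at 100 DWPD can absorb roughly 200TB of writes every day, about 365PB over the warranty period, which is why sustained training workloads that would wear out ordinary SSDs are feasible here.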

Improves Inference

aiDAPTIV+ enhances the inferencing experience by accelerating Time To First Token (TTFT) for faster responses. Furthermore, it extends the supported token length, providing greater context for longer and more accurate answers.


Unlocks Training of Larger Model Sizes


No longer limit the size of the models you fine-tune to the HBM or GDDR memory capacity of your GPU card. aiDAPTIV+ expands the memory footprint by intelligently incorporating flash memory and DRAM into a larger memory pool.

This enables larger training models, giving you the opportunity to affordably run workloads previously reserved for the largest corporations and cloud service providers.

Offers expire 3/31/2026 at 11:59 P.M. PT.