The Tinkertoy Computer, built by Daniel Hillis and Brian Silverman
Tinker is a training API for researchers
Control every aspect of model training and fine-tuning while we handle the infrastructure.
Your ideas in four functions
forward_backward
Performs a forward pass and a backward pass, accumulating the gradient.
optim_step
Updates the weights based on the accumulated gradient.
sample
Generates tokens for interaction, evaluation, or RL actions.
save_state
Saves training progress for resumption.
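To make the loop concrete, here is a minimal sketch of how these four functions might compose into a supervised fine-tuning run. The client object, its exact signatures, and the batch format are assumptions for illustration, not the documented Tinker interface; see the cookbook for real examples.

```python
def run_training(client, batches, prompts, checkpoint_name):
    """Sketch of a supervised fine-tuning loop over the four primitives.

    `client` is assumed to expose forward_backward / optim_step / sample /
    save_state as described above; argument shapes here are illustrative.
    """
    for batch in batches:
        # Forward and backward pass; the gradient accumulates on the service.
        client.forward_backward(batch)
        # Apply the accumulated gradient to the adapter weights.
        client.optim_step()

    # Sample from the partially trained model to spot-check behavior.
    for prompt in prompts:
        print(client.sample(prompt, max_tokens=64))

    # Persist progress so the run can be resumed later.
    client.save_state(checkpoint_name)
```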
Supported models
Tinker uses LoRA
LoRA fine-tunes models by training a small add-on instead of changing all the original weights.
Read the blog post
FAQs
Sign up for our waitlist here. If you're a university or organization looking for wide scale access, contact tinker@thinkingmachines.ai.
Tinker is a flexible API for efficiently fine-tuning open source models with LoRA. It's designed for researchers and developers who want flexibility and full control of their data and algorithms without worrying about infrastructure management.
LoRA is an efficient approach to fine-tuning that trains a streamlined adapter instead of updating all base model weights. Our research demonstrates that with the right setup, LoRA matches the learning performance of full fine-tuning while providing more flexibility and requiring less compute.
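For intuition, here is a generic sketch of the low-rank adapter that LoRA trains. The dimensions, scaling factor, and initialization follow the common LoRA formulation and are illustrative only, not Tinker's internals.

```python
import numpy as np

# Generic illustration of a LoRA update (not Tinker-specific).
# A frozen weight W of shape (d_out, d_in) is augmented with a trainable
# low-rank product B @ A, where the rank r is much smaller than d_out or d_in.
d_out, d_in, r = 1024, 1024, 8
alpha = 16.0

W = np.random.randn(d_out, d_in)          # frozen base weight
A = np.random.randn(r, d_in) * 0.01       # trainable down-projection
B = np.zeros((d_out, r))                  # trainable up-projection, zero-initialized

def adapted_forward(x):
    # Effective weight is W + (alpha / r) * B @ A, but only A and B receive gradients.
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = np.random.randn(2, d_in)
y = adapted_forward(x)                     # shape (2, d_out)

# The adapter trains r * (d_in + d_out) values instead of d_in * d_out.
print(r * (d_in + d_out), "adapter parameters vs", d_in * d_out, "full weights")
```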
Tinker handles scheduling, tuning, resource management, and infrastructure reliability so you can focus on the training data and algorithms. Behind the scenes, Tinker orchestrates distributed training on powerful GPU clusters for efficient utilization.
You provide a dataset of supervised learning examples or reinforcement learning environments. After picking a base model to train on, the Tinker API provides simple functions to compute gradients, update the weights, and sample outputs from the trained model. See our cookbook for examples to get started.
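As an illustration of the reinforcement learning case, here is a sketch of one update built from the same primitives. The environment interface and the rollout format passed to forward_backward are hypothetical, not Tinker's documented schema.

```python
def rl_step(client, env, num_rollouts=8):
    """Sketch of one reinforcement-learning update using the same primitives.

    `env` is assumed to expose reset()/score() for an environment you define;
    the rollout dictionaries below are an illustrative format.
    """
    rollouts = []
    for _ in range(num_rollouts):
        prompt = env.reset()                            # task prompt from your environment
        action = client.sample(prompt, max_tokens=256)  # model acts by generating tokens
        reward = env.score(prompt, action)              # your environment assigns a reward
        rollouts.append({"prompt": prompt, "completion": action, "reward": reward})

    # Compute gradients weighted by reward (e.g. a simple policy-gradient objective),
    # then apply them to the adapter weights.
    client.forward_backward(rollouts)
    client.optim_step()
```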
Tinker is currently available for a broad selection of open-source models, ranging from compact models like Llama-3.2-1B to large MoEs like Qwen3-235B-A22B-Instruct. We plan to expand our model lineup with even more choices soon.
You can download model weights at any point during and after training.
Tinker will be free to start. We will introduce usage-based pricing in the coming weeks.