Find every tracked developer and jump straight to their models and hardware guidance.
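A rough way to turn the parameter counts below into hardware guidance is to estimate how much memory the weights need at a given quantization. The sketch that follows is illustrative only: the estimate_vram_gb and suggest_tier helpers, the 4-bit default, and the 8 GB / 48 GB thresholds are assumptions for this example, not the catalog's actual classification rules.

```python
# Illustrative sketch: translate a parameter count into a rough memory budget
# and a coarse hardware tier. All names and thresholds here are assumptions
# for this example, not the catalog's actual rules.

def estimate_vram_gb(params_billions: float,
                     bits_per_weight: int = 4,
                     overhead: float = 1.2) -> float:
    """Rough memory to hold the weights, in GB: params * bits / 8, plus ~20%
    overhead for activations and KV cache (assumed, workload-dependent)."""
    return params_billions * bits_per_weight / 8 * overhead

def suggest_tier(params_billions: float) -> str:
    """Coarse bucketing by 4-bit footprint (thresholds are assumptions)."""
    footprint = estimate_vram_gb(params_billions, bits_per_weight=4)
    if footprint <= 8:    # fits a typical 8 GB laptop or edge GPU
        return "edge-friendly quantizations"
    if footprint <= 48:   # fits a single 24-48 GB workstation GPU
        return "workstation-grade inference"
    return "multi-GPU or server-scale"

if __name__ == "__main__":
    for size_b in (3, 14, 70, 235):
        print(f"{size_b}B params: ~{estimate_vram_gb(size_b):.1f} GB at 4-bit "
              f"-> {suggest_tier(size_b)}")
```

By this crude estimate, the 1B-8B entries below sit comfortably in edge territory, while the 70B-and-up releases land in workstation or multi-GPU territory.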
Qwen
57 models, from 3B to 235B parameters, focused on workstation-grade inference.
meta-llama
20 models, from 1B to 71B parameters, focused on workstation-grade inference.
deepseek-ai
16 models, from 3B to 685B parameters, focused on workstation-grade inference.
microsoft
13 models, from 4B to 14B parameters, focused on edge-friendly quantizations.
lmstudio-community
9 models, from 4B to 30B parameters, focused on workstation-grade inference.
unsloth
9 models, from 1B to 32B parameters, focused on edge-friendly quantizations.
mistralai
8 models, from 7B to 675B parameters, focused on high-throughput coders.
google
Models from under 1B to 27B parameters, focused on lightweight edge deployment.
openai-community
4 models, all at 7B parameters, focused on edge-friendly quantizations.
ibm-granite
3 models, from 2B to 8B parameters, focused on edge-friendly quantizations.
MiniMaxAI
3 models, from 7B to 456B parameters, focused on workstation-grade inference.
NousResearch
3 models, from 8B to 71B parameters, focused on workstation-grade inference.
openai
3 models, from 20B to 120B parameters, focused on workstation-grade inference.
RedHatAI
3 models, from 70B to 90B parameters, focused on workstation-grade inference.
trl-internal-testing
3 models, all at 7B parameters, focused on edge-friendly quantizations.
allenai
2 models, from 1B to 7B parameters, focused on edge-friendly quantizations.
black-forest-labs
2 models, both at 7B parameters, focused on edge-friendly quantizations.
EleutherAI
2 models, both at 7B parameters, focused on edge-friendly quantizations.
facebook
Models from 1B to 7B parameters, focused on edge-friendly quantizations.
GSAI-ML
2 models, both at 8B parameters, focused on edge-friendly quantizations.
HuggingFaceTB
2 models, both at 7B parameters, focused on edge-friendly quantizations.
moonshotai
2 models, from 49B to 1000B parameters, focused on workstation-grade inference.
nvidia
2 models, from 9B to 71B parameters, focused on workstation-grade inference.
tencent
2 models, from 1B to 7B parameters, focused on edge-friendly quantizations.
zai-org
2 models, both at 7B parameters, focused on edge-friendly quantizations.
01-ai
1 model, 34B parameters, focused on workstation-grade inference.
ai-forever
1 model, 13B parameters, focused on edge-friendly quantizations.
AI-MO
1 model, 72B parameters, focused on workstation-grade inference.
Alibaba-NLP
1 model, 5B parameters, focused on edge-friendly quantizations.
apple
1 model, 1B parameters, focused on edge-friendly quantizations.
baichuan-inc
1 model, 32B parameters, focused on workstation-grade inference.
bigcode
1 model, 3B parameters, focused on edge-friendly quantizations.
bigscience
1 model, 7B parameters, focused on edge-friendly quantizations.
BSC-LT
1 model, 7B parameters, focused on edge-friendly quantizations.
codellama
1 model, 34B parameters, focused on workstation-grade inference.
context-labs
1 model, 3B parameters, focused on edge-friendly quantizations.
dicta-il
1 model, 7B parameters, focused on edge-friendly quantizations.
distilbert
1 model, 7B parameters, focused on edge-friendly quantizations.
dphn
1 model, 34B parameters, focused on workstation-grade inference.
EssentialAI
1 model, 8B parameters, focused on edge-friendly quantizations.
Gensyn
1 model, 5B parameters, focused on edge-friendly quantizations.
google-bert
1 model, under 1B parameters, focused on edge-friendly quantizations.
google-t5
1 model, 3B parameters, focused on edge-friendly quantizations.
hmellor
1 model, 7B parameters, focused on edge-friendly quantizations.
HuggingFaceH4
1 model, 7B parameters, focused on edge-friendly quantizations.
HuggingFaceM4
1 model, 7B parameters, focused on edge-friendly quantizations.
huggyllama
1 model, 7B parameters, focused on edge-friendly quantizations.
ibm-research
1 model, 3B parameters, focused on edge-friendly quantizations.
IlyaGusev
1 model, 8B parameters, focused on edge-friendly quantizations.
inference-net
1 model, 3B parameters, focused on edge-friendly quantizations.
kaitchup
1 model, 4B parameters, focused on edge-friendly quantizations.
LiquidAI
1 model, 2B parameters, focused on edge-friendly quantizations.
liuhaotian
1 model, 7B parameters, focused on edge-friendly quantizations.
llamafactory
1 model, 7B parameters, focused on edge-friendly quantizations.
lmsys
1 model, 7B parameters, focused on edge-friendly quantizations.
mlx-community
1 model, 20B parameters, focused on workstation-grade inference.
nari-labs
1 model, 2B parameters, focused on edge-friendly quantizations.
numind
1 model, 7B parameters, focused on edge-friendly quantizations.
OpenPipe
1 model, 14B parameters, focused on edge-friendly quantizations.
parler-tts
1 model, 7B parameters, focused on edge-friendly quantizations.
petals-team
1 model, 7B parameters, focused on edge-friendly quantizations.
rednote-hilab
1 model, 7B parameters, focused on edge-friendly quantizations.
rinna
1 model, 7B parameters, focused on edge-friendly quantizations.
skt
1 model, 7B parameters, focused on edge-friendly quantizations.
sshleifer
1 model, 7B parameters, focused on edge-friendly quantizations.
swiss-ai
1 model, 8B parameters, focused on edge-friendly quantizations.
TinyLlama
1 model, 1B parameters, focused on edge-friendly quantizations.
Tongyi-MAI
1 model, 7B parameters, focused on edge-friendly quantizations.
vikhyatk
1 model, 7B parameters, focused on edge-friendly quantizations.
WeiboAI
1 model, 2B parameters, focused on edge-friendly quantizations.