The full 200+ model catalog is available in two formats:

- Markdown (models_catalog.md) → a human-readable table.
- JSON (models_catalog.json) → machine-readable, for use in scripts.
Example: Markdown (models_catalog.md)

Instead of a long table with hundreds of rows, this compressed version summarizes the catalog by family and key variants while keeping it informative and scannable:
AI Model Catalog (200+ Models)
Here’s a quick snapshot of popular open-source models
and their key variants:
| Family | Sizes Available | Types / Variants |
| --- | --- | --- |
| LLaMA 2 | 7B, 13B, 70B | Base, Chat, Quantized (GGUF) |
| Mistral | 7B | Base, Instruct, Quantized |
| Falcon | 7B, 40B, 180B | Base, Instruct, Quantized |
| GPT-NeoX / Pythia | 1B – 20B | Base, Chat |
| StableLM | 3B, 7B | Base, Tuned |
| Gemma | 2B, 7B | Instruction-tuned |
| Mixtral (MoE) | 8x7B | Sparse Mixture of Experts |
| Others | Many (200+) | Hugging Face hosts wide variations (fine-tuned, distilled, quantized) |
👉 Instead of listing all 200+ IDs, this summary groups models by family, size, and type, making it easy to scan.
Below, the same catalog is presented as a collapsible/accordion-style section. It keeps the page neat but lets readers expand a family if they want details. It works in Markdown with embedded HTML, which most blog platforms (WordPress, Ghost, and Medium with HTML blocks) support.
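Each family is wrapped in the standard HTML `<details>`/`<summary>` element, which most Markdown renderers pass through, collapsing the table until the reader clicks the summary line. The markup pattern (shown here with a single placeholder row taken from the catalog) looks like this:

```markdown
<details>
<summary>LLaMA 2 (7B, 13B, 70B)</summary>

| Model ID | Parameters | Type |
| --- | --- | --- |
| meta-llama/Llama-2-7b-hf | 7B | Base |

</details>
```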
📚 AI Model Catalog (200+ Models)
Here’s a grouped overview of popular AI models.
Click to expand each family:
<details>
<summary>LLaMA 2 (7B, 13B, 70B)</summary>

| Model ID | Parameters | Type |
| --- | --- | --- |
| meta-llama/Llama-2-7b-hf | 7B | Base |
| meta-llama/Llama-2-7b-chat-hf | 7B | Chat |
| meta-llama/Llama-2-13b-hf | 13B | Base |
| meta-llama/Llama-2-13b-chat-hf | 13B | Chat |
| meta-llama/Llama-2-70b-hf | 70B | Base |
| meta-llama/Llama-2-70b-chat-hf | 70B | Chat |
| TheBloke/Llama-2-7B-Chat-GGUF | 7B | Quantized |
| TheBloke/Llama-2-13B-Chat-GGUF | 13B | Quantized |
| TheBloke/Llama-2-70B-Chat-GGUF | 70B | Quantized |

</details>

<details>
<summary>Mistral (7B)</summary>

| Model ID | Parameters | Type |
| --- | --- | --- |
| mistralai/Mistral-7B-v0.1 | 7B | Base |
| mistralai/Mistral-7B-Instruct-v0.1 | 7B | Instruct |
| TheBloke/Mistral-7B-Instruct-GGUF | 7B | Quantized |

</details>

<details>
<summary>Falcon (7B, 40B, 180B)</summary>

| Model ID | Parameters | Type |
| --- | --- | --- |
| tiiuae/falcon-7b | 7B | Base |
| tiiuae/falcon-7b-instruct | 7B | Instruct |
| tiiuae/falcon-40b | 40B | Base |
| tiiuae/falcon-40b-instruct | 40B | Instruct |
| tiiuae/falcon-180b | 180B | Base |

</details>

<details>
<summary>StableLM (3B, 7B)</summary>

| Model ID | Parameters | Type |
| --- | --- | --- |
| stabilityai/stablelm-3b-4e1t | 3B | Base |
| stabilityai/stablelm-7b | 7B | Base |
| stabilityai/stablelm-7b-tuned | 7B | Tuned |

</details>

<details>
<summary>Gemma (2B, 7B)</summary>

| Model ID | Parameters | Type |
| --- | --- | --- |
| google/gemma-2b | 2B | Instruction-tuned |
| google/gemma-7b | 7B | Instruction-tuned |

</details>

<details>
<summary>Mixtral (MoE 8x7B)</summary>

| Model ID | Parameters | Type |
| --- | --- | --- |
| mistralai/Mixtral-8x7B-v0.1 | 8×7B | Sparse MoE |
| TheBloke/Mixtral-8x7B-GGUF | 8×7B | Quantized |

</details>
👉 This way, the page stays clean but still offers full detail for readers who want to dive deep.
Example: JSON (models_catalog.json)

```json
[
  {
    "model_id": "meta-llama/Llama-2-7b-hf",
    "family": "LLaMA 2",
    "parameters": "7B",
    "type": "Base"
  },
  {
    "model_id": "meta-llama/Llama-2-7b-chat-hf",
    "family": "LLaMA 2",
    "parameters": "7B",
    "type": "Chat"
  },
  {
    "model_id": "meta-llama/Llama-2-13b-hf",
    "family": "LLaMA 2",
    "parameters": "13B",
    "type": "Base"
  },
  {
    "model_id": "meta-llama/Llama-2-13b-chat-hf",
    "family": "LLaMA 2",
    "parameters": "13B",
    "type": "Chat"
  }
]
```
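Because the JSON file is machine-readable, it is easy to filter in a script. Here is a minimal sketch, assuming models_catalog.json sits in the working directory and follows the schema shown in the excerpt above (the filter values are just an illustration):

```python
import json

# Load the machine-readable catalog. Each entry is assumed to have the
# keys shown in the excerpt above: model_id, family, parameters, type.
with open("models_catalog.json", "r", encoding="utf-8") as f:
    catalog = json.load(f)

# Example: collect the IDs of all chat-tuned LLaMA 2 models.
llama2_chat = [
    entry["model_id"]
    for entry in catalog
    if entry["family"] == "LLaMA 2" and entry["type"] == "Chat"
]

print(f"Found {len(llama2_chat)} LLaMA 2 chat models:")
for model_id in llama2_chat:
    print(f"  - {model_id}")
```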
⚡ Since the full catalog is very large (200+ rows), only short excerpts are shown inline; the complete models_catalog.md and models_catalog.json files contain every entry.



