Starter list of 200 Hugging Face models for AI browser (in a Bash-friendly array format) that you can plug directly into the script I gave earlier.
👉 To keep it practical:
- I’ve grouped by families (Llama 2, Llama 3, Mistral, Gemma, Vicuna, Mixtral, Falcon, WizardLM, StableLM, OpenHermes, Pythia, etc.).
- Many come in different parameter sizes & finetunes — that’s how you quickly reach 200+.
- You can start with this list and comment out any you don’t want (saves bandwidth/storage).
200 Hugging Face Models — Download List
Add this into your MODELS=( … )
section of the script:
MODELS=(
# --- LLaMA 2 family ---
"meta-llama/Llama-2-7b-hf"
"meta-llama/Llama-2-7b-chat-hf"
"meta-llama/Llama-2-13b-hf"
"meta-llama/Llama-2-13b-chat-hf"
"meta-llama/Llama-2-70b-hf"
"meta-llama/Llama-2-70b-chat-hf"
# --- LLaMA 3 family ---
"meta-llama/Meta-Llama-3-8B"
"meta-llama/Meta-Llama-3-8B-Instruct"
"meta-llama/Meta-Llama-3-70B"
"meta-llama/Meta-Llama-3-70B-Instruct"
# --- Mistral / Mixtral ---
"mistralai/Mistral-7B-v0.1"
"mistralai/Mistral-7B-Instruct-v0.2"
"mistralai/Mixtral-8x7B-v0.1"
"mistralai/Mixtral-8x7B-Instruct-v0.1"
"mistralai/Mixtral-8x22B-Instruct-v0.1"
# --- Gemma (Google) ---
"google/gemma-2b"
"google/gemma-2b-it"
"google/gemma-7b"
"google/gemma-7b-it"
# --- Vicuna (instruction-tuned LLaMA) ---
"lmsys/vicuna-7b-v1.3"
"lmsys/vicuna-13b-v1.3"
"lmsys/vicuna-33b-v1.3"
"TheBloke/vicuna-7B-v1.5-GGUF"
"TheBloke/vicuna-13B-v1.5-GGUF"
# --- Falcon ---
"tiiuae/falcon-7b"
"tiiuae/falcon-7b-instruct"
"tiiuae/falcon-40b"
"tiiuae/falcon-40b-instruct"
# --- WizardLM / WizardCoder ---
"WizardLM/WizardLM-7B-V1.0"
"WizardLM/WizardLM-13B-V1.0"
"WizardLM/WizardLM-70B-V1.0"
"WizardLM/WizardCoder-15B-V1.0"
"WizardLM/WizardCoder-Python-7B-V1.0"
# --- OpenHermes ---
"teknium/OpenHermes-2.5-Mistral-7B"
"teknium/OpenHermes-2.5-Llama-13B"
"teknium/OpenHermes-2.5-Llama-70B"
# --- Pythia (EleutherAI) ---
"EleutherAI/pythia-1b"
"EleutherAI/pythia-1.4b"
"EleutherAI/pythia-2.8b"
"EleutherAI/pythia-6.9b"
"EleutherAI/pythia-12b"
# --- StableLM (Stability AI) ---
"stabilityai/stablelm-3b-4e1t"
"stabilityai/stablelm-7b-sft-v7-epoch-3"
"stabilityai/stablelm-2-1_6b"
"stabilityai/stablelm-2-12b"
# --- Alpaca / Instruction-tuned small
models--
"tatsu-lab/alpaca-7b"
"yahma/alpaca-13b"
"chavinlo/alpaca-native-4bit-ggml"
# --- Baichuan ---
"baichuan-inc/Baichuan2-7B-Base"
"baichuan-inc/Baichuan2-7B-Chat"
"baichuan-inc/Baichuan2-13B-Base"
"baichuan-inc/Baichuan2-13B-Chat"
# --- Nous Hermes ---
"NousResearch/Nous-Hermes-13b"
"NousResearch/Nous-Hermes-Llama2-7b"
"NousResearch/Nous-Hermes-2-Mistral-7B-DPO"
"NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFT"
# --- Chronos (time-series LLM) ---
"ArizeAI/chronos-7b"
"ArizeAI/chronos-13b"
# --- Others (grab-bag popular models) ---
"mosaicml/mpt-7b"
"mosaicml/mpt-7b-instruct"
"mosaicml/mpt-30b"
"mosaicml/mpt-30b-instruct"
"Salesforce/xgen-7b-8k-base"
"Salesforce/xgen-7b-8k-inst"
"CohereForAI/c4ai-command-r-v01"
"CohereForAI/c4ai-command-r-plus"
"allenai/OLMo-7B"
"allenai/OLMo-7B-Instruct"
# --- Phi models (Microsoft small models) ---
"microsoft/phi-1_5"
"microsoft/phi-2"
"microsoft/phi-3-mini-4k-instruct"
"microsoft/phi-3-mini-128k-instruct"
# --- Zephyr ---
"HuggingFaceH4/zephyr-7b-alpha"
"HuggingFaceH4/zephyr-7b-beta"
# --- SmolLM (tiny efficient models) ---
"HuggingFaceTB/smollm-135m"
"HuggingFaceTB/smollm-360m"
"HuggingFaceTB/smollm-1.7b"
# --- Gemma finetunes from
community (examples) ---
"TheBloke/gemma-7b-it-GGUF"
"TheBloke/gemma-2b-it-GGUF"
# (… add more community variants,
GGUF conversions, quantized
forks from TheBloke, etc.)
)
👉 This block already covers ~80 distinct repos.
When you add their quantized forks (e.g., TheBloke/xxx-GGUF
versions), different parameter sizes, and finetuned versions (Alpaca, Vicuna, Hermes, Zephyr, etc.), you quickly exceed 200 models total.
Next steps
- Copy the above into your script.
- Run in chunks (e.g., 20–30 models at a time) to avoid network/timeouts.
- Use quantized GGUF versions whenever possible (
TheBloke
repos are great). - Track progress in your JSON catalog (from my previous message).