LLM Gateways¶
These gateways provide access to various LLMs, enabling users to leverage the capabilities of different models for their specific needs.
Online¶
The following services are hosted online and can be used directly in the browser.
Perplexity¶
AI-powered search engine specializing in real-time web information retrieval and synthesis. Combines advanced language models with search capabilities to provide comprehensive, up-to-date research results with integrated fact-checking and citations.
Online Service: Perplexity
Hugging Face Spaces¶
Leading platform hosting transformer models, comprehensive benchmarks, and training/evaluation datasets. Provides essential infrastructure for AI model development, testing, and deployment, serving as a central hub for the machine learning community.
Online Service: Hugging Face Spaces
Offline¶
These offline LLM gateways let users run AI models locally, ensuring data privacy and allowing work without an internet connection after the initial model download. They do, however, require reasonably capable hardware to perform well: to run a 7B model, for example, an Apple Silicon Mac with at least 8 GB of RAM or a Windows machine with an NVIDIA GPU is recommended.
Anaconda AI Navigator¶
Comprehensive tool for running and managing large language models locally. Provides access to over 200 curated LLMs, including popular models like Llama 3, Gemma, and Mistral. Features include a local API server, a built-in AI assistant, and offline capability, ensuring data privacy and security.
Download: AI Navigator
Ollama¶
Open-source tool that allows users to run large language models locally on their machines. It supports models such as Llama 3, Phi-3, Gemma, and Mistral, providing a simple command-line interface for chat interactions and a REST API for integration.
Download: Ollama
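As a minimal sketch of the REST API mentioned above: the snippet below sends a non-streaming generation request to Ollama's default local endpoint (localhost:11434, its documented default; adjust if you changed it). The model name `llama3` assumes you have already pulled that model with `ollama pull llama3`.

```python
import json
import urllib.request

# Assumed default: Ollama's REST API listens on localhost:11434.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str) -> dict:
    """Build a non-streaming request body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """POST the request and return the generated text from the JSON response."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Requires a running `ollama serve` with the llama3 model pulled locally.
    print(generate("llama3", "Why is the sky blue?"))
```

The same interaction is available interactively from the command line with `ollama run llama3`.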
LM Studio¶
Versatile software for exploring, downloading, and running local LLMs like LLaMA, Falcon, MPT, and StarCoder. It supports offline model execution on Apple Silicon Macs (M1/M2), Windows, and Linux, offering an in-app chat UI and an OpenAI-compatible local server.
Download: LM Studio
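Because the local server is OpenAI-compatible, any OpenAI-style client can talk to it by pointing the base URL at localhost. The sketch below uses only the standard library and assumes LM Studio's default server port (1234); the model identifier `local-model` is a placeholder for whichever model you have loaded in the app.

```python
import json
import urllib.request

# Assumed default: LM Studio's local server runs on localhost:1234
# and exposes OpenAI-compatible endpoints under /v1.
CHAT_URL = "http://localhost:1234/v1/chat/completions"


def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }


def chat(model: str, user_message: str) -> str:
    """POST the request and return the assistant's reply text."""
    payload = json.dumps(build_chat_request(model, user_message)).encode("utf-8")
    req = urllib.request.Request(
        CHAT_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Requires LM Studio's local server to be started with a model loaded.
    print(chat("local-model", "Summarize what an LLM gateway is."))
```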
WebLLM¶
Browser-based LLM service specializing in client-side processing, enabling model execution directly in web browsers with no server-side dependencies. Once a model has been downloaded into the browser, it can run entirely offline.
Access: WebLLM