Local adapter server
OpenAdapter listens on 127.0.0.1 and exposes local
OpenAI-compatible endpoints for tools and clients on your Mac.
Native macOS developer utility
OpenAdapter runs a local OpenAI-compatible adapter for developer AI workflows. It ships with bundled local models, supports optional third-party cloud providers, and contains no ads, analytics, telemetry, or OpenAdapter-operated cloud relay.
What it does
Use the bundled Gemma 4 IT local models for offline inference. Prompts and responses for bundled local models stay on your Mac.
Configure your own third-party cloud provider keys when you want a remote model. Requests are sent only to the provider you select.
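Because the adapter speaks the OpenAI wire format on 127.0.0.1, any HTTP client can talk to it. A minimal sketch of building such a request follows; the port (8080), the path layout, and the model name are assumptions for illustration, since the actual values come from your OpenAdapter configuration.

```python
import json
import urllib.request

# Assumed local address; the real port is set in the OpenAdapter UI.
BASE_URL = "http://127.0.0.1:8080/v1"

def build_chat_request(prompt, model, token):
    """Build an OpenAI-style chat completion request for the local adapter."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # The local bearer token is the one OpenAdapter keeps in Keychain.
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

# urllib.request.urlopen(build_chat_request(...)) would send it;
# not executed here because it requires the adapter to be running.
```

Whether the request is served by a bundled local model or forwarded to a configured cloud provider depends only on the model you select.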
Privacy-first defaults
API keys and the local bearer token are stored in macOS Keychain.
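Secrets stored this way can be read back with the standard macOS `security` CLI. The sketch below shows the general technique; the service and account names are hypothetical, not OpenAdapter's actual Keychain entries.

```python
import subprocess

def read_generic_password(service, account):
    # Hypothetical service/account names for illustration only.
    # `security find-generic-password -w` prints the secret on macOS.
    try:
        out = subprocess.run(
            ["security", "find-generic-password",
             "-s", service, "-a", account, "-w"],
            capture_output=True, text=True,
        )
    except FileNotFoundError:
        return None  # not on macOS; the `security` CLI is unavailable
    return out.stdout.strip() if out.returncode == 0 else None
```

Keeping keys in the Keychain rather than in config files means they are protected by the user's login credentials and never sit in plain text on disk.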
By default, persistent logs store only request metadata, never prompt or response bodies.
OpenAdapter's developer does not collect any data from this app.
The app does not include ads, analytics SDKs, telemetry, or tracking.
For App Store users
When you use a bundled local model, inference runs locally on your Mac. When you configure a third-party cloud AI provider, OpenAdapter sends the prompts, instructions, tool metadata, and other request data needed for that model from your Mac to the provider you selected. The provider's terms, privacy practices, quotas, and charges may apply.