User Guide · Version 1.0
TealKit is a mobile AI agent powered by the Model Context Protocol (MCP). Connect it to any compatible LLM (OpenAI, Anthropic Claude, Google Gemini, Mistral, Ollama …) and your own data sources — documents, emails, websites, files — then let the AI run multi-step agents for you automatically.
Everything runs on your device. No data is sent to TealKit servers.
| Platform | Notes |
|---|---|
| Android | Output files default to internal app storage. Set a custom path in Settings → General → Output Directory to use a visible folder (e.g. Downloads). Background agent scheduling uses WorkManager. |
| iOS | API keys are stored in iOS Keychain. File access via the Files app requires tapping Choose Directory — the system file picker remembers the grant. Background agents run via BGTaskScheduler. |
| Windows | The Windows desktop app is available now on the Microsoft Store. The Linux version is downloadable from GitHub (see Desktop Features §17). macOS support is coming in a later release. Purchasing any TealKit mobile version (Android or iOS) unlocks the full desktop version at no extra cost — one purchase covers all platforms. |
Enter the model name exactly as the provider lists it, e.g. `gpt-4o`, `claude-sonnet-4-5`, or `gemini-2.5-flash`. For a local Ollama instance, enter `http://localhost:11434` as the base URL and leave the API key blank.

| Provider | API key & registration | Recommended models (2025) |
|---|---|---|
| OpenAI | platform.openai.com/api-keys | gpt-4o, gpt-4o-mini, o3-mini, o4-mini |
| Anthropic | console.anthropic.com | claude-opus-4-5, claude-sonnet-4-5, claude-3-5-haiku-20241022 |
| Google Gemini | aistudio.google.com (free tier available) | gemini-3.1-pro, gemini-2.5-pro, gemini-2.5-flash |
| Mistral | console.mistral.ai | mistral-large-latest, mistral-small-latest |
| Azure OpenAI | Azure Portal → Azure AI Foundry → Keys & Endpoint | Depends on deployment name |
| Ollama (local) | No key needed — ollama.com for setup | llama3.1, phi4, mistral, qwen2.5 |
When configuring an LLM, you will find several advanced settings. Not every parameter shown in other models or external guides applies to TealKit; here is a quick reference for the ones available:

- **Temperature**: use low values (0.0 – 0.3) for precise, factual agents to avoid hallucinations. Use higher values (0.7 – 1.0) for more varied, natural language.
- **Max tool output**: `0` for unlimited. Reduce this to prevent very large tool outputs (e.g. a huge web page) from consuming the context window.
- **Max input characters**: default 1,500,000. Adjust based on your model's context window.
- **Top-K**: lower values (10–40) make output more focused; higher values allow more variety. Leave blank to use the provider's default. Most effective with Ollama and local models.
- **Top-P**: values around 0.9 keep output diverse but coherent. Set to 1.0 to disable. Leave blank for provider default.
- **Repetition penalty**: values above 1.0 (e.g. 1.1) reduce repetition; 1.0 means no penalty. Useful for long-form outputs. Leave blank for provider default.

TealKit lets you configure a second, independent AI model dedicated to code generation. This is useful for keeping a powerful (but more expensive) primary model for reasoning and agent execution, while using a faster or cheaper model for writing shell scripts and JavaScript tools.
Configure LLM 2 under Settings → LLM Settings → LLM 2 tab. The same providers and parameters are available as for the primary model.
Tip: `gemini-2.5-flash` or `mistral-small-latest` works well as LLM 2, saving your primary model budget for complex reasoning agents.

Here are some examples of what you can accomplish in the Playground or build as automated Agents:
- Run `disk_usage` on your server; the raw output becomes `${task_result}`. Add a conditional trigger: "if ${task_result} > 65 %" → route to following agent `diskusage_alert`.

The Playground is an interactive chat where you experiment with the AI and its tools in real time.

Enable **Stop after tool call** to capture the raw tool output as `${task_result}` for a chained agent. Useful when you want unmodified tool output (e.g. SSH command result, web scrape, document search) to flow directly into a downstream agent.

Agents are saved automation workflows. Each agent packages an LLM, tools, a system prompt, and an initial message — ready to run with one tap. Free tier: up to 3 agents. PRO removes the limit.
Automate any agent to run on a schedule:
Set an interval or a cron expression (e.g. `0 8 * * 1-5` for weekday mornings).

Tap a completed run entry to view detailed stats:
| Metric | Description |
|---|---|
| Duration | Total wall-clock time for the run |
| Status | Success / Failed / Cancelled |
| Tokens used | Prompt + completion token count |
| Characters | Output character count |
| Tool calls | Number of MCP tool calls made |
| Messages | Total conversation turns |
Control the output format through your system prompt or initial message:
- Ask for Markdown (`.md`), HTML, JSON, or plain text
- Use the `excel` server to produce a `.xlsx` spreadsheet from any structured data
- Use the `chart` or `mermaid` servers to produce `.png` diagram or chart images

Agent results can be delivered automatically to one or more channels when a run finishes. Each channel has its own send condition and can optionally attach the output files. Global credentials are configured once in Settings → Data Sources; per-agent overrides are set on the agent's Output tab.
Any generated file can be attached — not just the Markdown log.
This includes .xlsx spreadsheets (from the excel server),
.png charts and diagrams (from chart / mermaid),
.html previews, and .md result files.
Enable the Include output files toggle on the agent's Output tab
to have all generated files forwarded to every active delivery channel.
Have the agent result emailed automatically when done. Configure in the agent's Output section:
Attachments include the `.md` result, `.html` preview, `.xlsx` spreadsheets, `.png` charts and diagrams, and any other files the AI created during the run.

Post the agent result to a Slack channel automatically. Creating a Slack App is free — create one at api.slack.com. Two auth modes are supported:
- **Incoming Webhook**: paste a webhook URL (`https://hooks.slack.com/services/…`); text content is embedded inline.
- **Bot Token** (`xoxb-…`) — enables real file uploads. Steps: add the Bot Token Scopes `chat:write`, `files:write`, `files:read`; reinstall the app and copy the `xoxb-…` token; invite the bot to the channel with `/invite @YourBotName`.

Per-agent options on the Output tab:
Attach files: the `.md` result log, `.html` preview, `.xlsx` spreadsheets, `.png` charts & diagrams, and any other files produced during the run (or the ZIP if ZIP output is enabled).
Note: a Bot Token is required for native file uploads; Webhook mode embeds text content inline.

Send the agent result to a WhatsApp number via the Meta Business Cloud API. Registration is free and includes 1,000 conversations / month at no cost. Requires a Meta Business account with a verified phone number.
Your Meta app needs the `whatsapp_business_messaging` permission. Per-agent options on the Output tab:

- Recipient number in international format (e.g. `+49170…`); leave blank to use the global default.
- Attach files: generated files (`.xlsx`, `.png`, `.md`, `.html`, …) are uploaded via the WhatsApp Media API and sent as separate document messages after the text summary.
File attachments work with the Meta Cloud API only — CallMeBot (text-only) lists file names in the message instead.

Upload agent results and generated files to a remote server automatically via SFTP when a run finishes. Configure SSH/SFTP credentials once in Settings → Data Sources → SSH.
Per-agent options on the Output tab:
- Upload all generated files (`.md`, `.html`, `.xlsx`, `.png`, …) to the remote directory

Bundle the full output folder (result + logs + attachments) into a single ZIP file. Combine with Email output to receive the archive by mail after every run.
Chain agents so the output of one feeds into the next. Wherever ${task_result}
appears in a chained agent’s system prompt or initial message, it is replaced with the
triggering agent’s output.
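The substitution itself is plain string replacement. A minimal sketch of the idea (the function name is hypothetical, not TealKit's actual implementation):

```javascript
// Replace every ${task_result} placeholder in a chained agent's prompt
// with the triggering agent's full output. split/join avoids the special
// "$" replacement patterns of String.prototype.replace.
function injectTaskResult(promptTemplate, taskResult) {
  return promptTemplate.split("${task_result}").join(taskResult);
}

// Example:
const prompt = "You received the following data:\n${task_result}\nSummarise it.";
console.log(injectTaskResult(prompt, "disk usage: 82%"));
```

Using split/join rather than a regex keeps outputs containing characters like `$&` from being misinterpreted.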
Enable Following agent mode on an agent (Basic tab) to mark it as a chained follow-up. Following agents only run when triggered by another agent — they are hidden from standalone agent lists and scheduling.
When setting up a trigger on an agent, choose between:

- **Unconditional**: always runs the chained agent after this one finishes.
- **Conditional**: the LLM evaluates a condition expression and routes to different follow-up agents depending on the outcome.
Enable Stop after tool call on an agent (Basic tab) or in Playground setup.
When set, the agent executes one tool call, captures the raw output, and stops immediately
— the raw tool output becomes ${task_result} for the next chained agent with
no further LLM processing. Ideal for data-extraction agents where you need the exact,
unmodified tool response (e.g. SSH output, search result, document snippet) to flow into a
downstream agent.
Example chain: Agent A runs `disk_usage` → raw output becomes `${task_result}` → Agent B checks condition "usage > 80 %" → on match: Agent C (Email) sends an alert → on no match: silent — chain ends.

Tip: use `${task_result}` anywhere in the chained agent's prompt. Example: "You received the following data: ${task_result} — identify the top 3 action items."

The system prompt field in the agent editor supports multi-step prompts using the
++#++ separator. TealKit visualises each section as a separate collapsible tile,
making it easy to read and edit individual steps without scrolling through a single large
text block.
For the full syntax, placeholder reference, and worked examples, see Section 5.1 — Prompt Splitting.
The agent editor shows a Preview button (👁 eye icon) next to the System Prompt header. Tapping it assembles the complete effective prompt that will be sent to the LLM at runtime — including the date/time header, toolbox guidance, tool capability hints, and any auto-injected tool skills — and displays it in a scrollable, editable dialog.
Prompt Splitting lets you break a single agent or Playground prompt into sequential steps
using the ++#++ separator (on its own line). Each step runs as a complete, independent LLM call —
the model handles one focused task at a time instead of trying to do everything in one long prompt.
Use the ${tool_result} placeholder in a later step to inject the raw tool output
(or LLM response text) from the previous step.
First step prompt
++#++
Second step prompt that uses ${tool_result}
++#++
Third step ...
Separate steps with `++#++` on its own line (the separator must appear as a standalone line). In step N, `${tool_result}` is replaced with the raw tool output of step N−1, or with step N−1's final LLM response if no tool was called.

Fetch the disk usage report by calling the check_disk_usage tool.
++#++
Here is the raw disk data:
${tool_result}
Format this as a Markdown table. Highlight any partition above 80 % in bold.
In this example the first step calls the tool; the second step receives the raw tool output and formats it. Each step is simple enough for a 7–14 B local model to handle reliably.
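The splitting rule above can be sketched in a few lines. This is an illustrative reimplementation of the documented behaviour, not TealKit's actual code:

```javascript
// Split a multi-step prompt on standalone "++#++" lines.
// Lines are trimmed before comparison so trailing spaces don't break
// the separator; an inline "++#++" does NOT start a new step.
function splitPromptSteps(prompt) {
  const steps = [];
  let current = [];
  for (const line of prompt.split(/\r?\n/)) {
    if (line.trim() === "++#++") {
      steps.push(current.join("\n").trim());
      current = [];
    } else {
      current.push(line);
    }
  }
  steps.push(current.join("\n").trim());
  return steps.filter((s) => s.length > 0);
}

const demo = "Fetch the report.\n++#++\nHere is the data:\n${tool_result}\nFormat it.";
console.log(splitPromptSteps(demo).length); // 2 steps
```

Each returned step would then be sent as its own complete LLM call, with `${tool_result}` substituted from the previous step.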
| | Prompt Splitting (++#++) | Agent Chaining PRO |
|---|---|---|
| Setup | One agent, one prompt field | Multiple separate agents |
| Pro required | No | Yes |
| Model per step | Same model for all steps | Different model per agent |
| Conditional branching | No | Yes (LLM-evaluated condition) |
| Individual schedules | No — single schedule, single task | Yes — each agent can run independently |
| Data handoff | ${tool_result} — raw tool output or LLM text from previous step | ${task_result} — full output of previous agent |
| Best for | Small/local models; sequential fetch → format → summarise within one agent; keeping each step focused and reliable | Cross-model pipelines; conditional routing; different output channels per step; complex multi-agent workflows |
Tip: with small or local models, split complex prompts with `++#++`. Small models usually handle one clear task per turn much better than a long multi-goal prompt.

Tool Skills are concise usage hints stored per MCP server tool and automatically injected into the effective system prompt at runtime. They help the LLM understand when and how to call each tool correctly, without you having to write this guidance yourself in the system prompt.
TealKit adjusts how much skill text is injected based on the active model:
| Model type | Skill text injected | When injected |
|---|---|---|
| Large models (e.g. GPT-4o, Claude Sonnet, Gemini 2.5 Pro) | Full skill text — detailed description of tool parameters and best-practice usage | Always, for every enabled tool |
| Small / compact models (SLMs, ≤7 B parameters: phi, mini, nano, tinyllama, qwen2.5:3b …) | Compact mini skill text — one short line per tool, keeping the prompt lean | Only when the agent has a substantive system prompt (multi-line or >50 characters). A very short or empty prompt receives no skills to avoid token bloat. |
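The "substantive system prompt" check for small models can be pictured as a tiny predicate. This is a sketch of the documented rule, not TealKit's actual code:

```javascript
// A system prompt counts as substantive when it spans multiple lines
// or exceeds 50 characters; only then do small models receive the
// compact per-tool skill lines.
function shouldInjectMiniSkills(systemPrompt) {
  const p = (systemPrompt ?? "").trim();
  return p.includes("\n") || p.length > 50;
}
```

In other words: an empty or one-line throwaway prompt stays lean, while a real agent prompt gets the compact skill hints.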
Skills can be generated automatically for any MCP server with a single tap:
You can also write or adjust skills by hand at any time:
Configure data sources in Settings → Data Sources. Each source can be toggled on/off per agent independently.
Connect Gmail via OAuth or configure any IMAP server. The AI can search, read, and optionally send emails.
Choose DuckDuckGo (free, no key needed), Serper.dev, or SerpAPI. Enter your API key for premium providers.
Point to one or more local folders. TealKit indexes all documents into a local DuckDB database using hybrid semantic + keyword search. See Document Search for details.
Add seed URLs and let TealKit crawl and index them locally into a full-text search index. Index before starting the Playground or Agent — crawling can be stopped at any time.
Indexing can be scheduled automatically on a cron schedule (minimum 1-hour intervals: hourly, daily, weekly, or monthly). A Last indexed timestamp is shown and a manual Index Now button is always available. Scheduled re-indexing runs in the background without any user interaction. PRO
Optionally save GPS coordinates. They are injected into every agent so queries like "weather at my location" resolve automatically. Coordinates never leave your device.
Connect via OAuth to search and read files from your Drive.
Connect to a remote SSH server. See the SSH section for full details.
Control your smart home via Home Assistant. Configure the Base URL and a Long-Lived Access Token once in Settings → Data Sources → Home Assistant and the AI can query and control any entity. See the Home Assistant section for full details.
Configure Slack credentials for automatic agent output delivery. Creating a Slack App is free — create one at api.slack.com. Two modes:
- **Incoming Webhook**: paste a webhook URL (`https://hooks.slack.com/services/…`). Text results are posted as formatted messages; attachments are embedded as code blocks.
- **Bot Token**: paste an `xoxb-…` token and set a default channel. Enables native file uploads. In your Slack App go to OAuth & Permissions → Scopes → Bot Token Scopes and add `chat:write`, `files:write`, `files:read`; then reinstall the app and copy the `xoxb-…` token. Invite the bot to the channel with `/invite @YourBotName`.

Configure Meta Business Cloud API credentials for WhatsApp output delivery. Registration is free and includes 1,000 conversations / month at no cost — get started on Meta Developers.
Required: an access token with the `whatsapp_business_messaging` permission and a default recipient number in international format (e.g. `+49170123456`).

TealKit ships with a set of built-in MCP servers and supports external servers from the broader ecosystem.
| Server | What it provides |
|---|---|
| documents | Semantic + keyword search over your local document folders |
| website_search | Crawl and search indexed websites |
| web_search | Live web search (DuckDuckGo / Serper / SerpAPI) |
| email | Read, search, and send emails via Gmail or IMAP |
| google_drive | Search and read files from Google Drive |
| toolbox | Current time, timezone, device location, and city geocoding |
| ssh | Run shell commands on remote SSH hosts PRO |
| home_assistant | Control smart home devices via the Home Assistant REST API PRO |
| weather | Current weather and forecast (uses location if available) |
| file | Create text, Markdown, or HTML output files |
| excel | Convert CSV / JSON / text data to an Excel .xlsx file PRO |
| chart | Generate PNG charts from numeric data series — chart types: line, bar, area, pie, scatter, histogram, and statistics_summary (4-panel dashboard). Optional parameters: title, axis labels, axis rotation, and custom line colors. PRO |
| mermaid | Render Mermaid diagram syntax (flowcharts, sequence diagrams …) to PNG images PRO |
| pdf | Generate PDF documents from AI-produced content PRO |
TealKit gives you three distinct and complementary ways to add tools to the AI. Understanding the difference helps you pick the right approach for each agent:
| Approach | What it is | Platforms |
|---|---|---|
| Remote MCP Servers | Cloud-hosted servers you connect to over HTTPS/SSE — nothing installed locally | All (mobile + desktop) |
| MCP Server Registry PRO | Public Node.js & Python servers you install locally with one click | Desktop only |
| Custom Tools / Scripts PRO | Your own mini MCP servers — shell scripts, JS snippets, or Python tools the AI calls as standard MCP tools | All (mobile + desktop) |
Connect to any MCP-compatible server running in the cloud over HTTP Streaming or SSE — no local installation needed. Go to Settings → Remote MCP Servers and choose a catalog source:
| Tab | Source | Notes |
|---|---|---|
| PulseMCP | registry.modelcontextprotocol.io | Browse & connect to hosted MCP endpoints |
| Smithery | smithery.ai | Optional global API key applied to all Smithery endpoints |
| Custom | Any URL | Server URL, endpoint path (/mcp), optional API key & password |
Each configured server exposes its tools directly in the Playground and Agents tool selector. Remote servers work on all platforms including Android and iOS.
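Under the hood, MCP servers speak JSON-RPC 2.0 over the HTTP transport. As an illustration of what a tool-listing request to a custom endpoint looks like (the URL is a placeholder; TealKit builds and sends these requests for you):

```javascript
// Build a JSON-RPC 2.0 "tools/list" request body, as defined by the
// Model Context Protocol specification.
function buildToolsListRequest(id) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/list",
  };
}

// Sending it to a custom endpoint would look roughly like this
// ("https://example.com/mcp" is a placeholder URL):
// fetch("https://example.com/mcp", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildToolsListRequest(1)),
// });
```

The server's response contains the tool names and input schemas that then appear in the tool selector.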
On desktop (Windows, Linux, macOS) TealKit can download and run MCP servers locally. These are real Node.js and Python servers installed on your machine with one click — no terminal needed. Go to Settings → MCP Server Registry and browse four public catalogs:
| Tab | Source | Install method |
|---|---|---|
| GitHub | Hand-picked catalog maintained by TealKit | Python (uvx / pip) & Node.js (npm / npx) |
| Glama | glama.ai/mcp/servers | Python & Node.js installable |
| PulseMCP | registry.modelcontextprotocol.io | Python & Node.js installable |
| Smithery | smithery.ai | Python & Node.js installable |
Tap Install on any entry — TealKit runs npm install -g, uvx, or pip in the background.
The installed server appears immediately in the tool selector. Uninstall removes the package and cleans up the entry.
Every script or tool you create in TealKit is exposed to the AI as a local MCP server tool — the AI calls it in exactly the same standardised way it calls any cloud server. This means you can build powerful agent automation without writing MCP server boilerplate:
| Tool type | Runs on | How the AI calls it |
|---|---|---|
| Shell / PowerShell script | SSH remote host or local machine | Via ssh_bridge / ps_bridge MCP server |
| JavaScript snippet | On-device secure sandbox | Via js_bridge MCP server |
| Python tool | Local Python environment (desktop) | Via python_bridge MCP server |
The LLM sees a clean tool name and input schema — it never knows (or cares) whether the tool is a cloud endpoint, an npm package, or a 20-line PowerShell script you wrote yesterday. All three share the same MCP protocol.
TealKit lets you build your own tools without writing any native code. Two wizards are available — accessible from the Tools tab in Playground settings or when editing an Agent. Free tier: 1 Shell script, 1 JavaScript tool, and 1 Python tool. PRO gives unlimited tools of each type.
Describe what you need in plain language and the AI writes a ready-to-use shell script. The script type is chosen automatically based on the target platform:
- Windows targets get a `.ps1` PowerShell script
- Linux/macOS targets get a `.sh` Bash script

Saved scripts are auto-discovered by the SSH server and can be referenced by name in any agent.
Tips for a good shell script prompt:
Write a custom MCP tool as a small JavaScript snippet. The snippet runs in a secure on-device sandbox and can call REST APIs with fetch(). Saved tools appear immediately in the tool selector for Playground and Agents.
Tips for a good JavaScript tool prompt:
Export the complete tool list of any MCP server in a machine-readable format — ideal for fine-tuning LLMs on tool-use data or for documenting your available tools.
The export icon (🧠 model_training) appears in multiple places:
Choose from four export formats:
| Format | Use case |
|---|---|
| OpenAI Functions | JSON array in OpenAI tools schema (function calling) |
| Anthropic Tools | JSON array in Anthropic tools schema |
| Markdown | Human-readable documentation with tool names, descriptions, and parameters |
| JSONL Fine-tuning | One JSON object per line, formatted for LLM fine-tuning datasets |
Tap Copy to copy the output to the clipboard, or Save to file to write it directly to disk (desktop platforms).
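The OpenAI Functions export is essentially a mechanical mapping from each MCP tool's name, description, and input schema into the OpenAI tools array. A minimal sketch of that mapping (illustrative, not TealKit's actual exporter):

```javascript
// Convert MCP-style tool definitions ({ name, description, inputSchema })
// to the OpenAI "tools" array used for function calling.
function toOpenAiTools(mcpTools) {
  return mcpTools.map((t) => ({
    type: "function",
    function: {
      name: t.name,
      description: t.description ?? "",
      parameters: t.inputSchema ?? { type: "object", properties: {} },
    },
  }));
}

const tools = toOpenAiTools([
  { name: "get_time", description: "Current time", inputSchema: { type: "object", properties: {} } },
]);
console.log(JSON.stringify(tools, null, 2));
```

The Anthropic, Markdown, and JSONL formats follow analogous transformations of the same three fields.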
TealKit's document search uses a local DuckDB database with hybrid semantic embedding + BM25 keyword search for high-quality results — no cloud dependencies.
Select which file types to include using the chip selector. Supported types:
pdf docx doc txt md
html htm csv json xml
xlsx xls pptx ppt
rtf odt ods odp
epub mobi log
Use All to select every type or Reset to deselect all.
The SSH server gives the AI the ability to run shell commands on a remote host. Configure the connection in Settings → Data Sources → SSH:
Ask the AI to write and immediately execute a script. Example prompt:
Connect to the server, write a bash script that gathers CPU usage every 5 seconds for 1 minute and saves the results to /tmp/cpu_report.txt, then show me the summary.
The AI writes the script, uploads it via SSH, executes it, and returns the full output — all in one agent run.
Place your own .sh or .ps1 scripts in the
Scripts Directory (set in Settings → General → Scripts Directory).
The AI auto-discovers all scripts and you can reference them by name in prompts:
Run the "backup_db.sh" script on the server and tell me if it completed successfully.
The Shell Script Library (accessible from the SSH configuration panel) lets you save, manage, and reuse scripts. Tap the 🔬 Load Samples button in the toolbar to instantly add three ready-to-use examples:
The Home Assistant MCP server lets the AI query and control any entity in your smart home via the Home Assistant REST API.
Enter the Base URL, typically `http://homeassistant.local:8123` for local installs or your Nabu Casa cloud URL (`https://<id>.ui.nabu.casa`), plus a Long-Lived Access Token. Once configured, the `home_assistant` server is ready to use in any Playground session or Agent, and the AI can query and control any entity (e.g. `light.*` entities). Example prompts:

Turn off all lights in the living room.
What is the current temperature in the bedroom?
Set the thermostat to 21 degrees and lock the front door.
Which lights are currently on?
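Behind a prompt like "Turn off all lights in the living room", the Home Assistant REST API receives a service call. A sketch of the request shape (the endpoint and payload follow the public Home Assistant REST API; TealKit's exact internals are an assumption):

```javascript
// Build a Home Assistant REST service call:
// POST <base>/api/services/<domain>/<service> with a bearer token
// and an entity_id payload, per the HA REST API.
function buildServiceCall(baseUrl, token, domain, service, entityId) {
  return {
    url: `${baseUrl}/api/services/${domain}/${service}`,
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ entity_id: entityId }),
    },
  };
}

// "Turn off the living room lights" maps to something like
// ("LLAT_TOKEN" is a placeholder for your Long-Lived Access Token):
const call = buildServiceCall(
  "http://homeassistant.local:8123", "LLAT_TOKEN",
  "light", "turn_off", "light.living_room"
);
// fetch(call.url, call.options);
```

The AI picks the domain, service, and entity IDs from your prompt; the token never leaves your device except to your own Home Assistant instance.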
TealKit saves core files per agent run to a timestamped subfolder:
- `output_log.md` — the main result in Markdown
- `execution_log.md` — step-by-step tool call log
- `output.html` — rendered HTML version of the result

In addition, any file the AI generates during the run is saved automatically:

- `.xlsx` — Excel spreadsheets created by the `excel` server (e.g. "Export the data as an Excel file")
- `.png` — charts generated by the `chart` server or Mermaid diagrams rendered by the `mermaid` server
- `.md`, `.html`, `.csv`, `.json` — files the AI writes via the `file` server

Go to Settings → General and tap Output Directory to choose a folder. Defaults to an internal app folder if left blank.
Use the Keep files for (days) setting (1–60 days, default 3) to automatically delete old runs. Cleanup runs on app start and every hour.
No. All API keys are stored exclusively in the device's OS secure keychain (iOS Keychain / Android Keystore) and never leave your device.
Local Ollama models work fully offline. Cloud AI providers and web search require an internet connection.
Uninstall TealKit. All local databases, keys, and settings are removed automatically.
Large document folders or websites with many pages take time. Tap the red Stop button to cancel at any point.
By default, files go to internal app storage. Set a custom path in Settings → General → Output Directory to use a visible folder (e.g. Downloads).
Prompt Splitting (++#++) breaks a single agent’s prompt
into sequential steps within the same agent. All steps share the same model, tools,
and settings. Use ${tool_result} in a later step to inject the raw tool output
from the previous step. No Pro required — works in Playground, scheduled agents, and
Server Mode. Best for small/local models that struggle with long multi-step prompts: split the
work into focused single-purpose steps that each model can handle reliably.
Agent Chaining PRO connects separate agents:
each agent has its own model, tools, schedule, and output configuration. The previous
agent’s full output is injected as ${task_result} into the next agent’s
prompt. Supports conditional routing — different follow-up agents depending on the
LLM’s evaluation of a condition expression.
Short rule: use Prompt Splitting when you want to break a complex task into micro-steps within one agent; use Agent Chaining when you need different models, conditional logic, or independent schedules across steps. See Section 5.1 for a full comparison table.
Each chained agent starts after its predecessor finishes. The predecessor’s output is
injected as ${task_result} into the chained agent’s prompt. Chaining can be
unconditional (always runs the next agent) or conditional
(the LLM evaluates an expression and routes to different agents depending on the outcome).
Enable Stop after tool call on an agent to capture the raw tool output
directly as ${task_result} without further LLM processing.
Run executes the agent autonomously and saves the result to disk. Interactive opens a full chat with the same LLM and tools, so you can guide the AI step by step — useful for exploration or debugging an agent.
If GDPR compliance or data sovereignty is important to you — for example in a business or regulated environment — choose Mistral AI, a provider headquartered in France that processes all data within the European Union. TealKit is fully compatible with Mistral out of the box: enter your API key under Settings → LLM 1 and select Mistral AI as the provider. Your prompts and data stay in EU infrastructure at all times.
Yes. Go to Settings → Embedded Models to download GGUF models and run inference fully on-device — no API key, no cloud connection. Browse the built-in HuggingFace catalog or paste any direct GGUF URL. Choose CPU-only, partial GPU, or full GPU offloading per model. Embedded models work best for text formatting, translation, and summarisation in Chat Mode. For agentic tool calling you need both a model explicitly trained for function calling (e.g. Qwen2.5-3B-Instruct) and enough GPU VRAM to load it fully — otherwise inference is too slow or tool schemas are ignored. See Section 19 — Embedded Models for the full hardware and capability guide.
Yes. TealKit supports Small Language Models (SLMs) through Ollama, LM Studio, or any OpenAI-compatible local endpoint — no cloud costs, no data sent externally. Keep in mind that every model has different strengths: a prompt that works perfectly with one model may need adjustment for another. Use the Playground to experiment with prompts across different models and find the best fit for each agent before turning it into an automated workflow. For pure text agents (translation, formatting, summarisation) enable Chat mode in Playground or agent settings to skip all tool overhead and send your prompt directly to the LLM — the fastest path from prompt to response for SLMs.
This is expected behaviour. Embedded (on-device) GGUF models require the app to be open (foreground or background-alive). When Android fully kills the app, the background alarm wakes a lightweight isolate that cannot load the on-device model — doing so would risk a crash or out-of-memory kill. TealKit skips the task and sends a 📱 “Open TealKit to run ‘…’” notification instead. Tapping the notification opens the app and the missed run is caught up automatically.
For reliable unattended execution, switch the agent’s LLM to a cloud provider (Gemini, OpenAI, Anthropic, Mistral) or your own Ollama server. Cloud- and Ollama-based agents run fully in the background regardless of whether the app is open. On Desktop (Windows, Linux, macOS) the app always stays alive in the system tray, so embedded-model agents run normally there. The upcoming Server Mode also keeps a persistent scheduler running 24/7 without the mobile app needing to be open.
On iOS, Apple’s BGAppRefreshTask system controls when background work is allowed. Unlike Android’s exact AlarmManager wakeup, iOS decides the timing itself based on battery level, network conditions, and recent app-usage patterns. Tasks may be delayed by minutes to several hours and will not fire at all when Low Power Mode is on or the app has not been used recently. TealKit registers a best-effort periodic background task; keep the app in the foreground or background (not fully closed / force-quit) for the most reliable scheduled execution on iOS.
The JavaScript Tool Wizard lets you create lightweight custom MCP tools using pure JavaScript — no native code or external packages required. Tools run in a secure on-device sandbox (QuickJS / JavaScriptCore) and are available immediately in Playground and Agents after saving.
Every generated tool must define exactly one object named generatedTool:
const generatedTool = {
name: "bitcoin_price",
description: "Fetch the current Bitcoin price from CoinGecko",
inputSchema: {
type: "object",
properties: {
currency: { type: "string", description: "Target currency code, e.g. usd" }
},
required: ["currency"]
},
execute: async (args) => {
try {
const cur = String(args?.currency ?? "usd").toLowerCase();
const res = await fetch(`https://api.coingecko.com/api/v3/simple/price?ids=bitcoin&vs_currencies=${cur}`);
const data = await res.json();
return JSON.stringify({ ok: true, price: data.bitcoin[cur], currency: cur });
} catch (error) {
return JSON.stringify({ ok: false, error: String(error?.message || error) });
}
}
};
Sandbox rules:

- `execute(args)` must return a JSON string via `JSON.stringify(…)` or a plain string
- `fetch()` is available for HTTP/REST calls
- `console.log()` output is captured and shown in test results
- Not available: `require`, `import`, `process`, `fs`, `path`, `net`, `child_process`
- No `await` outside `execute()`
- Always define the complete `generatedTool` snippet including schema.
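Before saving a snippet, you can sanity-check it against the contract above. A minimal shape validator (illustrative only; the wizard performs its own validation):

```javascript
// Check that an object satisfies the generatedTool contract:
// non-empty string name, string description, object inputSchema,
// and a callable execute().
function validateGeneratedTool(tool) {
  const errors = [];
  if (typeof tool?.name !== "string" || tool.name.length === 0) errors.push("missing name");
  if (typeof tool?.description !== "string") errors.push("missing description");
  if (typeof tool?.inputSchema !== "object" || tool.inputSchema === null) errors.push("missing inputSchema");
  if (typeof tool?.execute !== "function") errors.push("missing execute()");
  return errors;
}

// A minimal valid tool (no network calls, so it is easy to test):
const generatedTool = {
  name: "echo",
  description: "Echo the input back",
  inputSchema: { type: "object", properties: { text: { type: "string" } } },
  execute: async (args) => JSON.stringify({ ok: true, text: String(args?.text ?? "") }),
};
console.log(validateGeneratedTool(generatedTool)); // [] when valid
```

An empty error array means the snippet at least has the required shape; the sandbox Test Run then verifies its actual behaviour.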
When the js_bridge server is selected, the AI can discover and call your tools automatically:
- `list_js_tools` — returns all saved tools with their schemas
- `run_js_tool` — call by name: `{ "toolName": "bitcoin_price", "args": { "currency": "eur" } }`
- Each saved tool is also exposed directly as a `js_<name>` tool (e.g. `js_bitcoin_price`)

TealKit tracks token consumption and estimates API cost per session and per agent run. No data is sent to TealKit — all calculations use built-in pricing tables on your device.
During a Playground chat, a token counter appears at the bottom of the chat area showing the cumulative tokens consumed and characters sent in the current session. The counter turns amber when approaching the model’s configured token warning threshold.
Tap the token counter to open the full detail sheet:
| Field | Description |
|---|---|
| Cumulative Tokens | Total prompt + completion tokens used since session start |
| Prompt Tokens | Tokens sent to the LLM (cumulative) |
| Completion Tokens | Tokens generated by the LLM (cumulative) |
| Last Request Tokens | Tokens used by the most recent LLM call |
| Last Request Cost | Estimated USD cost of the most recent call |
| Session Cost (est.) | Estimated total USD cost for the entire session |
| Model Pricing Info | Input / output price per 1 M tokens for the selected model |
Every completed agent run records full statistics. Tap any run entry in the agent history to view:
| Metric | Description |
|---|---|
| Duration | Total wall-clock time for the run |
| Status | Success / Failed / Cancelled |
| Tokens used | Total prompt + completion tokens |
| Chars sent | Characters sent in LLM requests |
| Tool calls | Number of MCP tool calls made |
| Messages | Total conversation turns |
| Last price | Estimated cost of the final LLM call (USD) |
| Total price | Estimated total session cost for this run (USD) |
Cost estimates are calculated from built-in pricing tables for the supported cloud providers. Example session figures:
Cumulative Tokens : 4 820 Prompt Tokens : 4 210 Completion Tokens : 610 Last Request Cost : $0.0062 Session Cost : $0.0241
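The arithmetic behind these figures is simple: token counts divided by one million, multiplied by the per-million-token price. A sketch with hypothetical prices (real prices come from TealKit's built-in tables):

```javascript
// Estimate USD cost from token counts and per-1M-token prices.
function estimateCost(promptTokens, completionTokens, pricing) {
  const input = (promptTokens / 1_000_000) * pricing.inputPerMTok;
  const output = (completionTokens / 1_000_000) * pricing.outputPerMTok;
  return input + output;
}

// Hypothetical pricing: $2.50 / 1M input tokens, $10.00 / 1M output tokens.
const cost = estimateCost(4210, 610, { inputPerMTok: 2.5, outputPerMTok: 10.0 });
console.log(cost.toFixed(4)); // "0.0166"
```

Completion tokens are typically priced several times higher than prompt tokens, which is why long generated outputs dominate session cost.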
Quick practical suggestions to get the most out of TealKit.
Open the JS Tool editor and describe the function precisely, including inputs, outputs, and the API to call:
Fetch the current Bitcoin price in EUR and USD from the free CoinGecko API.
Input: { currency: string }. Return: { price, currency, lastUpdated }.
Tap Generate, inspect the code, then Test Run it in the sandbox before saving. The tool is instantly available in any session.
Open the Shell Script Wizard and state the target environment, inputs, and expected behaviour:
Bash on Ubuntu. The script takes a folder path as argument, finds all .log files older than 7 days, deletes them, and prints a summary (count and freed space) to stdout. Exit code 1 on error, error message to stderr.
After generation, tap Test Run to execute via SSH and verify the output before saving to the script library.
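For that prompt, the generated script might look like the following sketch (function name and output wording are illustrative, not what the wizard actually produces):

```shell
# Hypothetical sketch of the log-cleanup script described above.
clean_old_logs() {
  dir="${1:?usage: clean_old_logs <folder>}"
  [ -d "$dir" ] || { echo "error: not a directory: $dir" >&2; return 1; }
  count=0
  bytes=0
  list=$(mktemp)
  # .log files not modified within the last 7 days
  find "$dir" -type f -name '*.log' -mtime +7 > "$list"
  while IFS= read -r f; do
    size=$(wc -c < "$f")
    rm -- "$f"
    count=$((count + 1))
    bytes=$((bytes + size))
  done < "$list"
  rm -f "$list"
  echo "Deleted $count log file(s), freed $bytes bytes."
}
```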
Before activating a cron schedule, open the agent with Interactive / Try-Out mode. This loads the agent’s configuration into a live Playground chat so you can step through the AI’s tool calls and refine the system prompt — without creating any scheduled run history.
Go to Settings → Remote MCP Servers → Custom tab and fill in:
- Path: `/mcp` (default)

Tap Test to validate connectivity and list the tools, then Add. The server's tools appear immediately in the tool selector.
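Behind Test, validation amounts to speaking JSON-RPC 2.0 to the server's endpoint. A sketch of the two payloads involved, following the MCP specification (the protocol version and client name here are assumptions):

```python
import json

MCP_PROTOCOL_VERSION = "2025-03-26"  # assumed; use the version your server supports

# 1) initialize -- the handshake every MCP client performs first
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": MCP_PROTOCOL_VERSION,
        "capabilities": {},
        "clientInfo": {"name": "tealkit-test", "version": "1.0"},
    },
}

# 2) tools/list -- enumerate the tools the server offers
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# Each request is POSTed as JSON to the server URL + path, e.g. https://host/mcp
wire = [json.dumps(initialize), json.dumps(list_tools)]
```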
In the Playground: tap the token counter at the bottom of the chat → the detail sheet shows last-request cost and cumulative session cost.
For Agents: open the agent → tap a completed run entry → the Last Run Statistics card shows Last price and Total price.
Tap the Reset Chat button (↺ icon in the toolbar, or the button below the last message) to:
The AI starts fresh with no memory of the previous conversation — useful when switching topics or when context has grown too large.
Purchasing TealKit Pro on Android or iOS includes the Windows and Linux desktop versions at no extra cost — one purchase, all platforms. The Windows app is on the Microsoft Store; the Linux build is downloadable from GitHub Releases. macOS support is coming in a future release.
Enter your TKIT-… key → tap Activate.

TealKit includes a 31-day free trial with full Pro access. After the trial expires without Pro, TealKit remains usable in a local manual mode: you can still run agents manually, save results to local files, and use the basic local bridges. Pro unlocks the advanced automation and integration features.
The Windows desktop app is available on the Microsoft Store. The Linux version is downloadable from GitHub Releases. macOS support is coming in a later release. Buying any single TealKit mobile version unlocks the desktop app at no extra cost.
On Linux, start the keyring daemon with `eval $(gnome-keyring-daemon --start --components=secrets)`, or install gnome-keyring if it is not present on your system.

Generate ready-to-run scripts directly on your local machine. Describe your agent in plain language and the AI writes the complete script, which you can run immediately or save to your script library. The script type is chosen automatically based on your platform:

- Windows: PowerShell (`.ps1`)
- Linux / macOS: Bash (`.sh`)

The Shell Script Library retains all your generated and saved scripts. Tap 🔬 Load Samples to add three ready-to-use examples:
Create custom MCP server tools written in Python directly from within TealKit. Describe what you need in plain language and the AI generates a complete, runnable Python MCP tool including input schema and execute function.
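As an illustration, a generated tool might look like the following sketch (the schema-plus-execute shape mirrors the description above, but the exact format TealKit generates may differ; all names here are illustrative):

```python
# Hypothetical Python MCP tool: word/character counter.
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "text": {"type": "string", "description": "Text to analyse"},
    },
    "required": ["text"],
}

def execute(params: dict) -> dict:
    """Return simple statistics about the input text."""
    text = params["text"]
    return {"words": len(text.split()), "chars": len(text)}
```

A tool like this needs no third-party imports and returns plain JSON-serialisable data, which keeps it portable across platforms.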
- Standard library available (`json`, `datetime`, `math`, `re`, `urllib` …)
- HTTP calls via `urllib.request` or the bundled `requests` library

The Python Tool Library keeps all your generated Python MCP tools. Tap 🔬 Load Samples to add three starter examples:
- A sample built on `psutil`

On Windows, a dedicated PowerShell Tool Library lets you create, manage, and reuse `.ps1` scripts executed locally by the AI via the `ps_bridge` MCP server. Tap 🔬 Load Samples to add three ready-to-use examples:
The `ps_bridge` server is available in any Playground session or Agent configured on Windows.

The MCP Server Registry lets you install Python and Node.js MCP servers locally with a single tap — no terminal, no package manager commands.
Go to Settings → MCP Server Registry.
TealKit automatically runs npm install -g, uvx, or pip in the background and registers the server.
Installed servers run locally on your machine and are started automatically by TealKit when needed. Their tools appear in the same tool selector as built-in and remote servers.
| Tab | Source | Install method |
|---|---|---|
| GitHub | TealKit curated catalog (hand-picked, tested) | npm / npx / uvx |
| Glama | glama.ai/mcp/servers | npm / uvx |
| PulseMCP | registry.modelcontextprotocol.io | npm / uvx |
| Smithery | smithery.ai | npm / uvx |
Python-based servers run via `uvx` (included with TealKit — no extra install needed).

Example: installing the filesystem server with access to `C:/Users/Me/Documents`. TealKit runs `npm install -g @modelcontextprotocol/server-filesystem` in the background. Once installed, the server exposes `list_directory`, `read_file`, `write_file`, and `search_files`. The `allowed_dirs` setting is required; use forward slashes (e.g. `C:/temp`) to avoid Windows path-quoting issues.

The most expensive part of any automated agent is the back-and-forth between the LLM and the outside world. Every tool-call round trip costs tokens and time. A simple optimisation: delegate the heavy lifting to a script or MCP tool and let the LLM only interpret the final result.
Example — listing recent uploads via SSH:
Instead of asking the LLM to run multiple SSH commands to filter files by date, format
sizes, and build a table, create a single shell script in the
Script Library (e.g. check_uploads) that does all of that and
returns a clean CSV with column headers. Your agent prompt then becomes simply:
Call script check_uploads /uploads 48. Create an Excel file from the returned CSV list.
The LLM makes two tool calls (run script, create file) instead of ten. Fewer tokens, faster execution, lower API cost, and a more reliable result.
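A sketch of what such a check_uploads script might contain (GNU find assumed; the CSV columns are illustrative):

```shell
# check_uploads <dir> <hours> -- hypothetical sketch of the single-script
# approach: emit recent files as CSV and let the LLM only interpret it.
check_uploads() {
  dir="${1:?usage: check_uploads <dir> <hours>}"
  hours="${2:?usage: check_uploads <dir> <hours>}"
  echo "name,size_bytes,modified"
  # GNU find: -mmin selects by modification time in minutes
  find "$dir" -type f -mmin "-$((hours * 60))" \
       -printf '%f,%s,%TY-%Tm-%Td %TH:%TM\n'
}
```

Because the script prints a header row, the LLM receives self-describing data and can pass it straight to a file-creation tool without reformatting.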
The Shell Script Wizard and JavaScript Tool Wizard can generate working code from a plain-language description. A clear prompt that specifies inputs, outputs, and edge cases gives the best results. However:
On desktop, three scripting environments are available — each with its own wizard and library:
| Platform | Script type | Wizard / Library | When to use |
|---|---|---|---|
| Windows | PowerShell (`.ps1`) | PowerShell Tool Library (Settings → Desktop → PowerShell Tools) | Local system administration, Windows API, registry queries, Active Directory, WMI/CIM |
| Linux / macOS | Bash / zsh (`.sh`) | Shell Script Wizard (Playground Tools tab or Agent editor) | File system operations, log parsing, cron helpers, remote SSH commands, package management |
| All platforms | Python (`.py`) | Python Tool Library (Settings → Desktop → Python Tools) | Data processing, REST APIs, CSV/Excel manipulation, cross-platform utilities — most portable |
- PowerShell: use `param()` blocks so the AI can call the script with named arguments.
- Python: mention needed packages in your description (e.g. “pandas and openpyxl”) — TealKit installs them automatically on first run. Choose Python for `requests`, or anything that needs a third-party library.
- Bash: mind macOS vs Linux tool differences (`gdate` vs `date -d`).

Not every agent needs the same model. As a rough guide:
Use the LLM 1 / LLM 2 selector in the Playground to run the same prompt through different models side-by-side. Also experiment with the Temperature setting — lower values (e.g. 0.1–0.2) produce more deterministic, reproducible tool calls; higher values add creativity but reduce reliability for automation agents.
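For reference, this is roughly what a low-temperature request body looks like in an OpenAI-style chat-completions API (model name and message content are placeholders; field names may differ for other providers):

```python
import json

# Low temperature (0.1-0.2) -> more deterministic, reproducible tool calls.
payload = {
    "model": "gpt-4o-mini",
    "temperature": 0.1,
    "messages": [
        {"role": "system", "content": "You are a file-management agent."},
        {"role": "user", "content": "List all PDF files modified today."},
    ],
}
body = json.dumps(payload)
```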
Embedded models let you download GGUF model files and run inference entirely on your device — no API key, no internet connection, no external server required. Go to Settings → Embedded Models to get started.
To add a model, paste a `.gguf` download link.
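As a rough sizing aid when picking a `.gguf` file: the download size (and the RAM needed to load it) is approximately parameter count times bits per weight divided by 8, plus overhead for context/KV cache. A back-of-the-envelope helper (the ~4.5 bits/weight figure for Q4_K_M-style quantisation is an approximation):

```python
def gguf_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate model file size in GB: parameters x bits / 8, ignoring overhead."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# e.g. an 8 B model at ~4.5 bits/weight (Q4_K_M-style quantisation)
size = gguf_size_gb(8, 4.5)  # ~ 4.5 GB
```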
| Task | Mode | Notes |
|---|---|---|
| Text formatting / clean-up | Chat Mode | Fast; no tools needed |
| Translation | Chat Mode | Even 1–3 B models are capable |
| Summarisation | Chat Mode | Keep context short for small models |
| Classification / tagging | Chat Mode | Simple structured output |
| Tool calling / agents | Agent mode | Requires GPU + tool-trained model (see note above) |