PIAdvisor

By Ray Detwiler


PIAdvisor provides processing guidance inside PixInsight. It sends a structured workspace snapshot (metadata, statistics, history, optional attachments) with your question to your selected provider.

Categories: Utilities

Keywords: LLM, PixInsight, astrophotography, processing guidance, workflow, context, chat, processing history, vision, multimodal, OpenAI, Gemini, Ollama

Contents

  1 Introduction
  2 Description
  3 Parameters
  4 Usage
  5 Licensing
  6 Model Selection
  7 Troubleshooting

1 Introduction

PIAdvisor connects PixInsight to Large Language Models (LLMs) using providers such as OpenAI, Google Gemini, OpenRouter, or local OpenAI-compatible servers (Ollama, LM Studio). For each request, PIAdvisor can collect structured context from your open images—including FITS metadata, processing history, astrometry, STF state, and image statistics—and include it alongside your question.

This allows responses to reference your specific image(s) and current workspace state, such as:

  • Next processing steps based on your workflow stage
  • Troubleshooting artifacts, gradients, or color balance issues
  • Tool parameter recommendations grounded in your image statistics
  • AstroBin or social media descriptions based on your acquisition data

1.1 Core Approach: Structured Context Analysis

PIAdvisor uses Structured Context Analysis: a deterministic context snapshot built from observable image data (dimensions, SNR, FITS headers, processing history) and workspace state.
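Conceptually, the snapshot is just a structured record assembled from data PixInsight already exposes. A minimal Python sketch of the idea (field names and sample values are illustrative, not PIAdvisor's actual schema):

```python
def build_context_snapshot(image):
    """Assemble a deterministic snapshot from observable image data.
    Field names here are illustrative, not PIAdvisor's actual schema."""
    return {
        "geometry": {"width": image["width"], "height": image["height"]},
        "statistics": {"mean": image["mean"], "median": image["median"]},
        "fits_headers": image.get("fits_headers", {}),
        "history": image.get("history", [])[-5:],  # newest entries only
    }

snapshot = build_context_snapshot({
    "width": 4144, "height": 2822,
    "mean": 0.0123, "median": 0.0098,
    "fits_headers": {"FILTER": "Ha"},
    "history": ["Debayer", "DynamicCrop", "SPCC", "HistogramTransformation"],
})
```

Because the snapshot is built only from observable data, the same workspace state always produces the same context.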

1.2 Free vs Pro

PIAdvisor is available in Free and Pro tiers:

Free Features

  • Single-image context (active view only)
  • Interactive tool links and workflow guidance
  • History snapshot up to 5 entries
  • Vision attachments up to 1024 max pixels
  • Visual memory up to 5 images
  • Full LLM provider support

Pro Features (License Required)

  • Workspace Mode: Include all open images in the context snapshot (L+RGB, SHO, starless/stars)
  • Higher Limits: History up to 20 entries, vision max pixels up to 2048, visual memory up to 20 images
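The tier caps above amount to a simple lookup plus a clamp; the following Python sketch encodes the documented limits (the data structure is illustrative, not PIAdvisor's internal representation):

```python
# Documented caps per tier (history entries, vision pixels, visual memory images).
TIER_LIMITS = {
    "free": {"history_max": 5,  "vision_max_pixels": 1024, "visual_memory": 5},
    "pro":  {"history_max": 20, "vision_max_pixels": 2048, "visual_memory": 20},
}

def clamp_to_tier(setting, requested, licensed):
    """Clamp a requested setting value to the cap for the active tier."""
    cap = TIER_LIMITS["pro" if licensed else "free"][setting]
    return max(0, min(requested, cap))
```

For example, requesting 10 history entries yields 5 on Free and 10 on Pro.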

2 Description

2.1 Key Features

Structured Context Analysis

Captures image metadata, FITS headers, processing history, astrometry, ICC profiles, and STF/stretch state. Requests include this structured snapshot alongside your question.

Workspace Mode (Pro)

Include all open images in a single context snapshot. This is useful for multi-channel workflows such as L+RGB or SHO.

Vision Support

Attach view captures or screenshots for vision-capable models. Attachments can help with diagnosing visual artifacts.

Streaming Responses

Render partial output while a request is in progress. If streaming fails before any content is received, PIAdvisor retries once without streaming.

Rich Formatting

Responses include styled callouts, code blocks, and clickable tool links that launch PixInsight processes directly.

Multiple Providers

Supports OpenAI, OpenRouter, Google Gemini, and any OpenAI-compatible API (Ollama, LM Studio, etc.).

2.2 Supported LLM Providers

  Provider                   | Endpoint                                                 | Notes
  OpenAI                     | https://api.openai.com/v1                                | Official OpenAI API
  OpenRouter                 | https://openrouter.ai/api/v1                             | Multi-model router (Claude, GPT, Llama, etc.)
  OpenAI-compatible (local)  | http://127.0.0.1:11434/v1                                | Default for Ollama. LM Studio typically uses http://127.0.0.1:1234/v1.
  Google Gemini              | https://generativelanguage.googleapis.com/v1beta/openai  | Gemini via OpenAI-compatible endpoint
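All of the providers above speak the OpenAI chat-completions format, so a request body has the same shape regardless of endpoint. A minimal sketch of such a body (the model id, prompt text, and context serialization are placeholders, not PIAdvisor's exact wire format):

```python
import json

def build_chat_request(model, system_prompt, context_snapshot, question):
    """Build an OpenAI-compatible /chat/completions request body.
    The context snapshot is serialized into the user message."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user",
             "content": f"{question}\n\nWorkspace context:\n"
                        f"{json.dumps(context_snapshot)}"},
        ],
    }

body = build_chat_request("llama3.2", "You are a PixInsight assistant.",
                          {"width": 4144, "height": 2822},
                          "What should I do next?")
```

The same body can be POSTed to any of the endpoints in the table; only the base URL and API key change.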

3 Parameters

3.1 Provider

Select the LLM provider to use:

OpenAI

The official OpenAI API (api.openai.com). Use Fetch Models to load available model ids.

OpenRouter

A multi-model router (openrouter.ai) providing access to dozens of models with a single API key.

Google Gemini

Google's Gemini API. Requires a Google AI Studio API key.

OpenAI-Compatible

Any API following the OpenAI chat completions format: Ollama, LM Studio, vLLM, etc.

3.2 API Endpoint

The full base URL of the LLM's API. This is auto-populated when you select a provider, but can be customized for private deployments or proxies.

3.3 API Key

Your API authentication key. For cloud providers, this is required. For local servers (Ollama, LM Studio), you can often leave this as "no-key" or empty.

API keys are stored in user-local configuration files under PixInsight's configuration directory (the PIAdvisor subfolder), not in the Windows Registry.

3.4 Model

The model identifier to use (e.g., "gpt-4o", "gemini-2.5-pro", "openrouter/auto", "llama3.2"). Click Fetch Models to load the list of available models from your provider.

For local servers, the model name must match exactly what you have loaded.

3.5 Override Temperature

When enabled, sends the specified temperature value to the LLM. When disabled, the provider's default is used.

3.6 Temperature

Controls the "creativity" of responses. Range: 0.0 (deterministic) to 2.0 (highly random). Recommended: 0.7 for balanced responses.

3.7 Max Tokens

Limits the maximum length of the LLM's response. Default: 4096. Set to 0 to omit the limit. Valid non-zero range: 256-65536.
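These rules (0 omits the limit, any other value falls in 256-65536) can be expressed as a small normalization step; a sketch, assuming a `None` return means the field is simply not sent:

```python
def normalize_max_tokens(value):
    """0 means 'omit the limit' (return None so the field is not sent);
    any other value is clamped to the valid 256-65536 range."""
    if value == 0:
        return None
    return max(256, min(65536, value))
```

So a value of 100 is raised to 256, while 4096 (the default) passes through unchanged.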

3.8 Include History Snapshot

When enabled, includes recent processing history entries (from the History Explorer) in the context sent to the provider. This helps establish what has already been done.

3.9 History Max Items

The maximum number of recent processing history entries to include. 0 disables history. Default: 5 (Free) / 10 (Pro). Max: 5 (Free) / 20 (Pro).

3.10 Enable Vision

When enabled, PIAdvisor can send image attachments to vision-capable models. Attachments are resized to the Vision Max Pixels limit.

3.11 Vision Max Pixels

The maximum resolution (in pixels on the longest side) for image attachments. Options are snapped to 512, 768, 1024, 1536, or 2048. Max: 1024 (Free) / 2048 (Pro).
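Snapping to the allowed options and scaling the longest side can be sketched as follows (the function names are illustrative, not PIAdvisor's internals):

```python
ALLOWED_MAX_PIXELS = (512, 768, 1024, 1536, 2048)

def snap_max_pixels(requested):
    """Snap a requested limit to the nearest allowed option."""
    return min(ALLOWED_MAX_PIXELS, key=lambda v: abs(v - requested))

def resized_dimensions(width, height, max_pixels):
    """Scale so the longest side does not exceed max_pixels,
    preserving the aspect ratio."""
    longest = max(width, height)
    if longest <= max_pixels:
        return width, height
    scale = max_pixels / longest
    return round(width * scale), round(height * scale)
```

For example, a 4144x2822 view attached under a 1024-pixel limit is sent at 1024x697; a view already within the limit is sent unscaled.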

3.12 Visual Memory Limit

Controls how many recent attached images are retained as historical reference in the conversation. 0 disables visual memory. PIAdvisor still focuses on the newest images attached in the current turn first. Default: 5 (Free) / 6 (Pro). Max: 5 (Free) / 20 (Pro).
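A rolling buffer with newest-first priority is a standard pattern; a minimal sketch of the behavior described above (class and method names are illustrative):

```python
from collections import deque

class VisualMemory:
    """Rolling buffer of recently attached images; oldest entries are
    evicted automatically once the limit is reached."""
    def __init__(self, limit):
        self.images = deque(maxlen=limit) if limit > 0 else None

    def add(self, image_ref):
        if self.images is not None:
            self.images.append(image_ref)

    def for_request(self, current_turn):
        """Current-turn attachments come first, then retained memory,
        newest first, with duplicates dropped."""
        retained = list(reversed(self.images)) if self.images else []
        return current_turn + [r for r in retained if r not in current_turn]
```

With a limit of 3, adding four images retains only the last three, and a new attachment in the current turn is always ordered ahead of them.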

3.13 Enable Debug Console Output

Writes debug information to the PixInsight console. Can also be enabled via PIA_DEBUG=1. Available in Dev builds only.

3.14 Enable Rich Formatting

When enabled, instructs the LLM to use structured formatting (callouts, tool links, code blocks) that PIAdvisor renders with custom styling.

3.15 Formatting Template (Markdown)

The formatting instructions injected into the system prompt when rich formatting is enabled. Use Reset -> Template to restore defaults.

3.16 Enable Streaming

When enabled, responses are streamed incrementally as they arrive from the provider. If streaming fails before any content is received, PIAdvisor retries once without streaming.
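The retry rule ("fails before any content" vs. "fails mid-stream") can be sketched generically; this is an illustration of the documented behavior, not PIAdvisor's actual implementation:

```python
def request_with_fallback(send_streaming, send_blocking):
    """Try a streaming request first; if it fails before any content
    arrives, retry exactly once without streaming. A failure after
    partial content has been shown is surfaced, not silently retried."""
    received = []
    try:
        for chunk in send_streaming():
            received.append(chunk)
        return "".join(received)
    except Exception:
        if received:  # partial output already rendered; do not retry
            raise
        return send_blocking()
```

A stream that dies on connect falls back to a blocking request; a stream that succeeds returns the concatenated chunks.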

3.17 System Prompt

The system instructions sent to the LLM at the start of each conversation. A detailed default prompt is provided by PIAdvisor. Advanced users can customize this, but changes may affect response quality.

Reset -> Prompt restores the built-in system prompt.

3.18 Installed Version

The Installed version line in LLM Settings shows the public PIAdvisor release you are running. Hover over it to see the full build identifier, which is useful when support or debugging requires the exact build.

4 Usage

4.1 Quick Start

  1. Open PIAdvisor from Process > PIAdvisor or the Process Explorer.
  2. Open the LLM Settings section near the bottom of the panel.
  3. Select a Provider (e.g., OpenAI, OpenRouter, Google Gemini, or OpenAI-compatible for local models).
  4. Enter your API Key (cloud providers require a key; local models may use "no-key" or be left empty).
  5. Click Fetch Models to load available models, then select one.
  6. Click Test to verify your settings.
  7. Click Save to persist your configuration.
  8. Check the Installed version line in LLM Settings when you want to confirm which release is running.

4.2 Having a Conversation

  1. Open one or more images in PixInsight. For best results, use plate-solved images with processing history.
  2. In the PIAdvisor panel, type your question in the input box at the bottom.
  3. Press Ctrl+Enter or click Send.
  4. PIAdvisor gathers context from your active image (or all images in Pro mode) and sends it to your selected provider.
  5. The response appears in the chat transcript. Clickable tool links (e.g., SPCC, BlurXTerminator) launch processes directly.

If you simply share an image ("Here's the red channel"), PIAdvisor may answer briefly and conversationally. If you want a detailed review, ask directly for analysis, advice, or next steps.

4.3 Using Workspace Mode (Pro)

Enable Workspace Mode by clicking the Workspace Mode button in the toolbar. When enabled:

  • Context is gathered from all open images, not just the active one.
  • The request context includes related images (e.g., starless + stars, L + R + G + B).
  • Image relationships are inferred from naming patterns and shared metadata.

Use Refresh Workspace to capture new changes across open images without sending a message.

Workspace Mode is a Pro feature. Without an active license, context is limited to the active image only.

4.4 Attaching Images

Enable the Enable Vision option in Settings to attach images. This requires a vision-capable model.

Attach View

Queues a resized copy of the currently selected PixInsight view for your next message.

Screenshot

Loads an image file from disk. Useful for screenshots, annotations, reference images, or examples from outside PixInsight.

Queued attachments appear in the chat composer and can be removed before you send. When sent, they appear inline with your message in the transcript.

PIAdvisor keeps a rolling visual memory of recent images, but it gives the newest images attached in the current turn priority over older ones. If you remove an image from a previous user turn, PIAdvisor reopens that turn so you can continue the conversation from that point.

Attachments are resized to the Vision max pixels limit before sending to control bandwidth and cost.

4.5 Exporting

Export Chat

Saves the entire conversation as a Markdown file for documentation or sharing.
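A Markdown chat export is essentially one heading per turn followed by its content; a minimal sketch of such a renderer (the layout is illustrative, not PIAdvisor's exact export format):

```python
def export_chat_markdown(turns):
    """Render a conversation as Markdown, one heading per turn."""
    lines = ["# PIAdvisor Chat Export", ""]
    for turn in turns:
        lines.append(f"## {turn['role'].title()}")
        lines.append("")
        lines.append(turn["content"])
        lines.append("")
    return "\n".join(lines)

md = export_chat_markdown([
    {"role": "user", "content": "How should I stretch this image?"},
    {"role": "assistant", "content": "Start with a gentle HistogramTransformation."},
])
```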

Export Debug (Dev builds)

Creates a support bundle containing context, prompt, request/response, streaming diagnostics, attachments, and logs. API keys are never exported. Useful for bug reports.

4.6 Keyboard Shortcuts

  Shortcut     | Action
  Ctrl+Enter   | Send message

5 Licensing

PIAdvisor includes Free features without a license. Pro features require activation.

5.1 Activating a License

  1. Open PIAdvisor and click the Preferences (wrench) button.
  2. In the PIAdvisor License dialog, enter your license key (format: PRO-XXXX-XXXX-XXXX or TRIAL-XXXX-XXXX-XXXX).
  3. Click Activate.

Activation verifies your key with the PIAdvisor license server and unlocks Pro features immediately.

5.2 Device Limits

Pro licenses support activation on up to 3 devices simultaneously. Trial licenses support up to 2 devices. If you exceed the limit, you'll receive an error prompting you to deactivate an existing device.

5.3 Trial Licenses

Trials are time-limited Pro licenses (currently 60 days). Trial keys are delivered by email and are never shown in the UI or API responses.

5.4 Deactivating a Device

To free up an activation slot:

  1. Open PIAdvisor on the device you want to deactivate.
  2. Click the Preferences (wrench) button.
  3. Click Deactivate.

5.5 Purchasing a License

License purchase details are provided on the PIAdvisor website. After purchase, your license key is delivered via email and can be activated in the License dialog.

6 Model Selection

Use the Fetch Models button to load the available model list from your provider. If your provider does not support model listing, enter the exact model id manually.

If you just need a starting point, these defaults are commonly used:

  • OpenAI: gpt-4o
  • Google Gemini: gemini-2.5-pro
  • OpenRouter: openrouter/auto
  • OpenAI-Compatible (local): llama3.2

7 Troubleshooting

7.1 Connection Errors

"Connection failed" or timeout

Verify your API endpoint URL is correct. Use Test to diagnose. Check firewall settings for local servers.

"401 Unauthorized"

Your API key is invalid or expired. Verify it in your provider's dashboard.

"429 Rate Limited"

You've exceeded your provider's rate limits. Wait and try again, or upgrade your API plan.
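A common client-side mitigation for 429 responses is retrying with exponential backoff; a generic sketch (this is standard practice, not a description of PIAdvisor's internal behavior):

```python
import time

class RateLimitError(Exception):
    """Raised when the provider returns HTTP 429."""

def with_backoff(call, retries=3, base_delay=1.0, sleep=time.sleep):
    """Retry a rate-limited call with exponentially growing delays
    (1s, 2s, 4s, ...) before giving up."""
    for attempt in range(retries + 1):
        try:
            return call()
        except RateLimitError:
            if attempt == retries:
                raise
            sleep(base_delay * (2 ** attempt))
```

A call that fails twice and then succeeds waits 1s and 2s before returning.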

"Model not found"

The model name doesn't match what's available. Click Fetch Models to see available options.

7.2 Empty or Garbled Responses

  • Try disabling Enable Streaming to use standard requests.
  • Some local models have limited context windows—try reducing history or using a smaller context.
  • Check the Process Console for error messages (Script > Show Console).

7.3 Export Debug (Dev builds)

For support requests, use Export Debug in the toolbar (Dev builds only). This creates a folder containing:

  • The prepared context snapshot used for the last request
  • Prompt, request metadata, and settings with API keys redacted
  • Last request/response exports plus streaming diagnostics when available
  • Attachment files and attachment metadata for the relevant turn
  • UI and worker logs for the last request

Share this bundle when reporting issues or when support asks which image or request PIAdvisor actually used.