The following is a reference manual for the latest version of the AI Explorer for SketchUp extension. It is assumed that you have successfully installed this extension (instructions are on the main extension page).
Basic Extension Operations and Use Cases
These are described on the main extension page.
Extension Options

You can find the Options panel by clicking on the Options button in the AI Explorer dialog. This panel provides several options to customize this extension. Edit those as needed.
Run Options:
System Message: This text is sent to the AI service’s API as the system message portion of a request. This field can contain any set of instructions that tell the AI what to do with the actual request (and what not to do). Feel free to experiment with different system messages such as these (a sketch of how the system message appears in a request follows this list):
- “Generate only valid, self-contained SketchUp Ruby code without any method definitions.”
- “Generate only SketchUp Ruby.”
- “Respond in Shakespearean English.”
- You can leave this empty if you just want to send the user request, e.g. for plain chatbot behavior.
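For illustration, here is a minimal sketch of the kind of request body a chat completion service in the OpenAI API style receives; the System Message field fills the "system" entry and your typed prompt becomes the "user" entry (the exact payload the extension assembles may differ slightly):
{
  "model": "gpt-4.1-mini",
  "messages": [
    { "role": "system", "content": "Generate only valid, self-contained SketchUp Ruby code without any method definitions." },
    { "role": "user", "content": "Draw a 1m x 1m x 1m cube at the origin." }
  ]
}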
Execute Code: By default this is set to not execute code, but if you want to automatically execute the generated code, then you can change that here. Two safety features have been implemented when this option is set to yes:
- Any request that contains a destructive word (“delete”, “erase”,…) will generate an additional user prompt before any code is executed.
- A system message is added that aims to prevent code generation for any requests that involve file system access.
Submit Model View with Request: Most current chat completion models can accept multimodal (e.g. image) input. If you use such a model, you can turn this option on; the extension will then upload the current SketchUp model view each time you enter a prompt. You can then ask questions like the ones listed below (a sketch of the multimodal message format follows the list). See this post for more information about this.
- “How can I improve this model?”
- “Critique my design in the context of contemporary architecture.”
- “Is this model 3D printable?”
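For reference, the OpenAI-style chat completions format typically sends the image alongside the text in a single user message, roughly as sketched below (the extension’s actual payload may differ; the base64 image data is truncated here):
{
  "role": "user",
  "content": [
    { "type": "text", "text": "How can I improve this model?" },
    { "type": "image_url", "image_url": { "url": "data:image/png;base64,iVBORw0KGgo..." } }
  ]
}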
TIP:
The System Message, Execute code, and Submit model view with request settings are all changed when the user selects a different use case in the main part of the dialog. They can all, however, be edited manually anytime.
Temperature: This parameter controls the “creativity” of the answer: responses are more deterministic at zero and more variable (and possibly less reliable) at higher values. Depending on the model, the maximum can be 1.0 or 2.0.
Max. Tokens: This number limits the length (and thereby the potential cost) of the response. It is usually best to keep this number low unless you need longer responses. The extension outputs the exact number of tokens used for each request, which lets you adjust this limit as needed.
Model View Submission Quality: You can decide on the image quality level you want to use here. Quality impacts image size and therefore token count (and cost).
Submit # of Prompts: If this is set to 1, each prompt is answered on its own (i.e. without the AI “remembering” previous prompts). Increase this number if you want to ask follow-up questions. Keep in mind, however, that a higher number means more tokens are submitted with each request (i.e. a higher cost).
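To see how Temperature, Max. Tokens, and the prompt history interact, here is an illustrative sketch of a follow-up request in the OpenAI chat completions format, assuming Submit # of Prompts is set to 2 so that the previous exchange is included (all values are examples only):
{
  "model": "gpt-4.1-mini",
  "temperature": 0.7,
  "max_tokens": 1000,
  "messages": [
    { "role": "system", "content": "You are a SketchUp Ruby coding helper. Generate valid SketchUp Ruby code and explain it briefly." },
    { "role": "user", "content": "Create a simple staircase with 10 steps." },
    { "role": "assistant", "content": "(previous code response)" },
    { "role": "user", "content": "Now make each step 20 cm high." }
  ]
}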
AI Service Options:
AI (Chat Completion) Model: This is preset to a current, well-working OpenAI model (gpt-4.1-mini), but you can enter a different model here at any time. A list of valid models can be found in the OpenAI API documentation or at the various other providers (see links in the next section). I have also discussed some of those in various blog posts.
API Key: Enter your API key here, exactly as provided by your service. You can get your API key here: OpenAI, Google, or Anthropic.
AI Service API Endpoint URL: This is preset to OpenAI’s endpoint at https://api.openai.com/v1/chat/completions. There is no need to change this unless you are switching AI service providers (as mentioned in the next section).
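To illustrate how the model name, API key, and endpoint URL fit together, here is a minimal, self-contained Ruby sketch of a chat completion request. This is for illustration only and is not the extension’s internal code; the key, model, and prompt are placeholders:
require "net/http"
require "json"
require "uri"

# Placeholder values; the extension reads the real ones from its Options panel.
endpoint = URI("https://api.openai.com/v1/chat/completions")
api_key  = "YOUR_API_KEY"

payload = {
  "model"       => "gpt-4.1-mini",
  "temperature" => 0.7,
  "max_tokens"  => 1000,
  "messages"    => [
    { "role" => "system", "content" => "Respond within the context of the SketchUp software." },
    { "role" => "user",   "content" => "How do I group the selected entities?" }
  ]
}

http = Net::HTTP.new(endpoint.host, endpoint.port)
http.use_ssl = true

request = Net::HTTP::Post.new(endpoint.request_uri)
request["Content-Type"]  = "application/json"
request["Authorization"] = "Bearer #{api_key}"
request.body = payload.to_json

response = http.request(request)
# The assistant's reply text sits under choices[0].message.content.
puts JSON.parse(response.body).dig("choices", 0, "message", "content")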
Extension Options:
Color Mode: Use this setting to change from light to dark mode or to allow the extension to adjust the mode based on the computer’s preference.
Using Different AI Services
If you want to use a different AI service (e.g. Google’s Gemini or Anthropic’s Claude), then you need to sign up with their respective API services and use their API keys and chat completion model names. You also need to enter their API endpoint URL into the settings dialog. Following are the details for these services.
Google Gemini
- Get your API Key here and start with a model like gemini-2.5-flash. Also, change the API endpoint URL to https://generativelanguage.googleapis.com/v1beta/openai/chat/completions - API Reference
Anthropic Claude
- Get your API Key here and start with a model like claude-3-5-haiku-latest. Also, change the API endpoint URL to https://api.anthropic.com/v1/chat/completions - API Reference
DeepSeek
- Get your API Key here and start with a model like deepseek-chat. Also, change the API endpoint URL to https://api.deepseek.com/chat/completions - API Reference
If you want to find more services, look for “OpenAI API compatibility” in their help documentation and use the details you find there.
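In practice, switching providers means changing only the model name, API key, and endpoint URL; the request shape stays the same. As a hypothetical illustration for Google’s OpenAI-compatible Gemini endpoint, the Ruby sketch from the previous section would only need these values swapped (the key is typically still sent as a Bearer token on OpenAI-compatible endpoints, but check the provider’s API reference):
# Hypothetical changes to the earlier illustrative sketch for Gemini.
endpoint = URI("https://generativelanguage.googleapis.com/v1beta/openai/chat/completions")
api_key  = "YOUR_GEMINI_API_KEY"
payload["model"] = "gemini-2.5-flash"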
A Word About Safety Guardrails
Beyond any safety guardrails that the various AI service providers (OpenAI, Google, Anthropic,…) have implemented for their respective models, this extension also adds the following behind the scenes when code execution is enabled:
- If a “destructive” (English) keyword like “delete” is used in the prompt, then the user is asked before code execution whether they really want to execute the code.
- An instruction is added to the system prompt that asks the AI service not to respond if the prompt asks about the local file system.
Please note, however, that neither of these two are foolproof. It is therefore best to use caution when automatic code execution is enabled.
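To make the first guardrail concrete, the keyword check behaves roughly like the following Ruby sketch (an illustration only, not the extension’s actual source; the word list here is hypothetical). UI.messagebox and the MB_YESNO/IDYES constants come from the SketchUp Ruby API, so this only runs inside SketchUp:
# Illustrative only: ask for confirmation before executing code for "destructive" prompts.
DESTRUCTIVE_WORDS = ["delete", "erase", "remove", "clear"]  # hypothetical list

def confirm_execution?(prompt)
  if DESTRUCTIVE_WORDS.any? { |word| prompt.downcase.include?(word) }
    # MB_YESNO shows Yes/No buttons; IDYES is returned when the user clicks Yes.
    return UI.messagebox("This request may modify or delete geometry. Execute the generated code anyway?", MB_YESNO) == IDYES
  end
  true
end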
Editing Default System Messages
The four use cases for this extension have four associated system messages that get pasted into the Options panel when you switch the use case. Those are:
{
"chat": "Respond within the context of the SketchUp software.",
"chat_vision": "Respond within the context of the SketchUp model shown in the image.",
"ruby_code": "You are a SketchUp Ruby coding helper. Generate valid SketchUp Ruby code and explain it briefly.",
"execute_ruby": "Generate only valid, brief and self-contained SketchUp Ruby code without any methods. Add a short explanation to the response."
}
These are defined in a file called system_msgs.json, which is located in this extension’s folder /as_openaiexplorer/. On Windows, that folder can be found at C:/Users/<username>/AppData/Roaming/SketchUp/SketchUp <version>/SketchUp/Plugins/. If you like, you can edit this file to provide different default system messages.
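As an example, if you wanted the plain chat use case to answer more concisely, you could change just that one entry and leave the others untouched (the added sentence here is illustrative; make sure the file remains valid JSON):
{
  "chat": "Respond within the context of the SketchUp software. Keep answers under 100 words.",
  "chat_vision": "Respond within the context of the SketchUp model shown in the image.",
  "ruby_code": "You are a SketchUp Ruby coding helper. Generate valid SketchUp Ruby code and explain it briefly.",
  "execute_ruby": "Generate only valid, brief and self-contained SketchUp Ruby code without any methods. Add a short explanation to the response."
}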
Troubleshooting
You can reset this extension by pasting the following code into SketchUp’s Ruby Console (which can be found under the Extensions menu) and executing it (hit Enter):
Sketchup.write_default("as_openaiexplorer", "openai_warning", nil)
Sketchup.write_default("as_openaiexplorer", "openai_explorer_settings", nil)
Sketchup.write_default("as_openaiexplorer", "openai_explorer", nil)
Sketchup.write_default("as_openaiexplorer", "disclaimer_acknowledged", nil)
Alternatively, you can also do this with the menu item Extensions > AI Explorer (Experimental) > Reset extension settings.