Prompt-based experimentation FAQ
This article answers common questions about prompt-based experimentation (PBX).
Which LLM is used for prompt-based experimentation?
Kameleoon currently uses OpenAI o3, GPT-4.1, and GPT-5 for its prompt-based experimentation capabilities.
Which LLM does Kameleoon use for image generation?
Kameleoon integrates with OpenAI gpt-image-1 for AI-powered image generation.
Can I use the images generated by Kameleoon AI?
Yes, all images generated by Kameleoon AI are free to use commercially. They are automatically uploaded to your Image Library within the Kameleoon platform for easy access.
Can I edit the code generated by Kameleoon?
Yes, you can edit code directly from the prompt-based interface. A new version of your variant will be automatically generated once saved.
Can I use the Graphic editor on a variant created with AI?
No, the Graphic editor cannot be used on AI-generated variants. While we understand some edge cases may benefit from the visual editor, our prompt-based experimentation is designed to handle all scenarios covered by the Graphic editor. To avoid conflicts, editing via the visual tool is currently disabled for AI-generated experiments.
Can I create any type of variant with prompts?
You can create any variant that can be managed with front-end code. However, if your prompt requires back-end logic or server-side changes, Kameleoon will not generate that code. In such cases, a developer familiar with your back-end system will need to step in.
Can I create multi-page experiments?
Yes, you can create multi-page experiments: prompting changes across multiple pages is supported. Simply browse to the pages of your choice and apply prompts directly. Kameleoon will automatically combine the code into the same variation.
However, we strongly recommend using Simulation mode to validate the complete experience and ensure that the code generated across different pages doesn't conflict.
Can I prompt changes on elements that appear on hover?
Yes, you can—as long as the HTML code for the hover-triggered element is already present in the page’s DOM. If the element is dynamically generated only after the hover interaction (for example, injected by JavaScript at runtime), Kameleoon’s AI may not be able to detect or modify it accurately.
For best results, ensure that:
- The hover element exists in the initial HTML (even if hidden).
- The structure is stable and not created asynchronously.
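A quick way to check this before prompting is to look for the element in the browser DevTools console. The sketch below is illustrative only; `.product-tooltip` in the usage note is a placeholder selector, not a real class on your site.

```javascript
// Checks whether a hover-triggered element already exists in the DOM.
// Run this from the browser DevTools console before prompting.
function isHoverTargetInDom(selector, doc = globalThis.document) {
  // Returns true if the element exists in the DOM (even if hidden via
  // CSS), false if it is only injected after the hover interaction.
  return Boolean(doc) && doc.querySelector(selector) !== null;
}
```

For example, if `isHoverTargetInDom('.product-tooltip')` returns `false` before you hover, the element is likely created at runtime and the AI may not be able to target it.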
I've created a variation on a product page and want to test it across several product pages—how can I do that?
Once your variation has been created on the first product page, simply navigate to the next product page. This process will re-execute the AI-generated code on the new page.
Alternatively, we recommend using the Simulation mode to test the full experience across all relevant pages and ensure the variation behaves consistently.
What is the added value of PBX compared to other vibe coding tools on the market?
PBX goes beyond standard vibe coding by being specifically designed and optimized for experimentation. Its added value comes from several key aspects:
- Agentic architecture for experimentation: PBX is built to handle complex experimentation logic and transform an idea into a working variant directly on your website in minutes.
- Browser-native integration: Unlike vibe coding tools, PBX is embedded in Chrome and leverages the full dynamic content of your site: the technology framework, the design system, and the complete DOM state at a given time, not just static resources like HTML, CSS, or JS. This capability enables PBX to generate code that interacts with dynamic elements created by scripts, including React components, while preserving the site's existing functionality.
- Visual contextual intelligence with human-in-the-loop (HITL): Provides both accuracy and control. When a prompt lacks sufficient context, PBX proactively asks clarifying questions before generating the variant.
- Performance-optimized and accessible code: All generated code is designed to minimize performance impact. PBX uses the native Kameleoon JavaScript API to seamlessly manage flicker, performance concerns, and dynamic components.
Why can the same prompt lead to different outputs when tried multiple times?
Our AI is non-deterministic, meaning it doesn't produce one fixed result for a given prompt. Instead, it explores many possible ways to fulfill your request. The same prompt can therefore produce different outputs each time you run it.
This happens because:
- Controlled randomness: The model intentionally adds variation to avoid identical answers and encourage creativity.
- Prompt interpretation: Even small differences in context can lead the model to emphasize different details.
- Training diversity: The model has been trained on a wide range of examples, so it may draw from different patterns each time.
If you want more consistent results, you can add extra constraints (for example, a mockup, a design file, or more detailed instructions) to guide the model toward a narrower set of outcomes.
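The "controlled randomness" described above can be illustrated with a toy temperature-sampling sketch. This is not Kameleoon's or the underlying model's actual implementation; it only shows why repeated runs of the same prompt can diverge.

```javascript
// Toy illustration of temperature sampling. probs is an array of
// next-token probabilities; temperature is a positive number.
function applyTemperature(probs, temperature) {
  // Raise each probability to 1/temperature, then renormalize.
  // Temperature 1 leaves the distribution unchanged; higher values
  // flatten it, so repeated sampling picks different tokens more often.
  const scaled = probs.map((p) => Math.pow(p, 1 / temperature));
  const sum = scaled.reduce((a, b) => a + b, 0);
  return scaled.map((p) => p / sum);
}
```

With a high temperature, a distribution like `[0.9, 0.1]` moves toward `[0.5, 0.5]`, which is why the same prompt can land on different outputs across runs.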
Can I create a prompt-based experiment without installing the Kameleoon script?
Yes; however, an additional step is required:
- Create an experiment as usual from the app. You’ll be redirected to the URL you selected.
- The editor will not load automatically. Use the shortcut Shift+F4 (on PC) or Fn+Shift+F4 (on Mac) to launch it.
You cannot launch the experiment until the Kameleoon snippet has been implemented.
Does prompt-based experimentation work on any website? What are the current limitations?
Prompt-based experimentation can be used on most websites, including single-page applications, but there are some limitations. Each time a prompt is submitted, Kameleoon processes the request and provides the AI with page context (such as HTML code or screenshots) to help it interpret the content. However, if this context is contained within iframes or shadow DOM, it may not be handled correctly. If you encounter this issue often, please contact your Customer Success Manager.
Can prompt-based experimentation handle complex experiments?
Prompt-based experimentation is highly effective when dealing with:
- Text or content changes (headlines, CTAs, disclaimers)
- Style updates (colors, fonts, layout tweaks)
- Banners and content insertions
- Simple interactive changes (button repositioning, links, modals)
However, prompt-based experimentation can also be used for multi-step flows, galleries, or custom interactive elements. For advanced use cases, a hybrid approach is best: use a prompt-based experiment to generate the initial implementation, then refine it manually if required. This way you get the speed of automation with the precision of custom development.
How reliable is prompt-based experimentation’s code quality?
Prompt-based experiments generate clean, secure, responsive, and accessible code. It often produces more robust solutions than manual coding, especially for modern websites and single-page applications (SPAs). QA best practices (cross-browser testing, visual checks) are also followed in prompt-based experiments.
The code PBX generates also respects your project-level configurations. For example:
- SPA settings and custom attributes are automatically applied.
- Exclusion rules are honored.
These considerations ensure generated code integrates seamlessly with your existing setup and follows the same rules as manually coded experiments.
Can prompt-based experimentation generate a variant by retrieving the content from a different page than the one I’m prompting from?
Short answer: no.
Prompt-based experimentation only uses the context available on the page you are currently on, meaning you cannot prompt the AI to create a feature by loading content or code from another URL—PBX does not browse other pages to retrieve additional content.
That said, if you provide PBX with access to an endpoint or web service, it can generate a variant that loads and uses this data. For example, if you want to add an urgency tooltip showing how many times a product was purchased in the last 24 hours, and you provide the endpoint in your prompt, PBX can generate the variant with the code needed to call and display that information.
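As a hedged sketch of what such generated code might look like: the endpoint URL, response shape, and `.add-to-cart` selector below are all hypothetical placeholders, not part of Kameleoon's API; whatever you supply in your prompt would be used instead.

```javascript
// Hypothetical endpoint -- replace with the one you give in the prompt.
const PURCHASES_ENDPOINT = '/api/products/123/purchases-last-24h';

// Pure helper: turns the purchase count into the tooltip text.
function formatUrgencyMessage(count) {
  return count > 0
    ? `Bought ${count} times in the last 24 hours`
    : 'Be among the first to buy this today';
}

// Sketch of the generated variant: fetch the count, then render a
// tooltip next to the product's call-to-action button.
async function showUrgencyTooltip() {
  const res = await fetch(PURCHASES_ENDPOINT);
  const { count } = await res.json(); // assumes a { "count": <number> } payload
  const tooltip = document.createElement('span');
  tooltip.className = 'urgency-tooltip';
  tooltip.textContent = formatUrgencyMessage(count);
  document.querySelector('.add-to-cart')?.appendChild(tooltip);
}
```

The point is that PBX can wire up the fetch-and-display logic for you, but only because the prompt told it where the data lives.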
PBX cannot automatically pull code or features from a different page of your site. For instance, if you want to add an “Add to Cart” CTA on a listing page, but the logic only exists on the product page, PBX cannot fetch and replicate it. However, if you provide explicit guidance, such as “call this endpoint to add the product to the cart”, PBX may be able to generate the variant correctly.
In some cases (for example, if your listing page includes a quick-view overlay), the required logic may already be present on the listing page while the overlay is open. In that scenario, prompting PBX with the quick-view displayed increases the chance of generating a fully functional variant, since the relevant code context might already be available on the page.
Can I use my own design mockup to create a variant?
Yes, you can use a design or mockup. Click the + icon > Add mockup to upload a mockup. However, the mockup alone is usually not sufficient. For best results, combine your mockup with clear, detailed instructions—just like you would when working with a developer.
Think of it this way: you wouldn't hand a developer a mockup without context or specifications. The same principle applies to PBX. The more specific details you provide, the better the output will be.
A good prompt with a mockup should:
- Attach your mockup/design file
- Describe what elements should change
- Specify dimensions, colors, spacing
- Explain any interactive behaviors
- Note any technical requirements
Does prompt-based experimentation apply my single-page application (SPA) settings?
Yes. If you've configured single-page application (SPA) settings in your project (see Set up an experiment on a single-page app), PBX will automatically use those settings. This includes:
- Custom attributes you've defined
- Rules to exclude specific IDs
- Any other SPA-specific configurations
My first prompt didn't generate the right output. Can I submit additional prompts to fix what's incorrect?
Yes, you can submit follow-up prompts to refine the output. However, it's important to understand how prompt history works.
What PBX remembers:
- The code it previously generated (used as context for your next prompt)
What PBX doesn't remember:
- Your previous prompt instruction
- The full conversation history
- Mockups or sketches that were added in previous prompts
Why this matters: Vague follow-up prompts like "Fix the issue with X and Y" may not work well because PBX doesn't have the full context of what you originally asked for.
Best practices for follow-up prompts:
- Less effective: "Browse between images using arrows does not work."
- More effective: "In the carousel that was added, the browse between images using arrows feature does not work as expected. Fix it by taking into account the code that was generated with my previous prompt. The arrows should appear on the left and right sides of the carousel and advance one image at a time when clicked."
When submitting a follow-up prompt, be specific about:
- The element or feature that needs fixing.
- The expected behavior.
- The previously generated code (reference it explicitly).
- Any relevant technical details.
Can I upload a file to use as an asset in my prompt-based experiment?
Not directly. The Add mockup option is not designed to attach files or images for use in your variant. It's meant to help you provide visual guidance or reference mockups that assist in creating your variant, not to include assets in the final result.
If you'd like to use an image or other asset in your prompt-based experiment, you must:
- Upload the asset to the Image Library.
- Copy the image's URL from the library.
- Reference that URL directly in your prompt.
- For example: "Use this image in the popup: [link-to-image]."
Your variant will then correctly display the image or asset you want to use.
The Kameleoon team is actively working on improving this flow to make it easier to include assets directly in your prompt-based experiments.
How does PBX ensure no personal or sensitive data is sent to the LLM (for example, OpenAI)?
PBX is designed to keep your data secure and compliant with industry standards—including those required by customers in regulated sectors like banking.
When you use PBX, the LLM is only invoked when you create an experiment from a webpage you control, meaning:
- No end-user data is sent to the LLM.
- The model only processes the page content visible to you (the experiment creator), not the data of any individual site visitors.
- Experiments are not generated dynamically for each user, but created once from your controlled environment.
In other words, PBX does not access or transmit personal or confidential information unless such information is manually included in the page or prompt, which should always be avoided.
The PBX workflow is designed with data security in mind, and Kameleoon is continuously improving it to maintain high standards of privacy and compliance.