POPal - User Story Writer powered by ChatGPT
Advanced Settings
POPal supports detailed customization at both the Global and Project levels. Advanced settings can be configured to tailor generation for your organization or specific projects.
Accessing Advanced Settings
Global:
Application Settings → POPal Global Setting
Project:
Project Settings → POPal Setting
(Project settings override global settings if “Use Global Settings” is not enabled.)
New Advanced Settings
Test Case Related (Global/Project Level)
Zephyr/Xray Integration: Enable integration (if installed) for enhanced test management.
If enabled, test cases are created in Zephyr/Xray, overriding the configured Jira issue type.
Issue Type for Test Case: Choose the Jira issue type for test cases.
Automation Script Language (Global/Project Level)
Select your preferred programming language and test driver for automation script generation (e.g., JavaScript + Selenium, Python + Playwright).
General Advanced Options (Project Level Only)
Additional Prompt (Generation Screen)
Allows users to add an extra prompt only for the item being generated (User Story or Test Case). This is applied on top of the standard prompt used by POPal, but it affects only the current generation.
Example: While generating a user story, you can add:
“Please generate acceptance criteria in Gherkin (BDD) format using Given–When–Then.”
→ This format will be applied only to that specific user story.
Allow Issue as Context
Lets users select an existing Jira issue (Epic or User Story) as extra context during generation. The selected issue’s content is used as supplemental knowledge to guide backlog generation.
This provides richer knowledge but does not define the scope of the requirement.
Allow Issue as Example
Enables users to pick an existing Epic, User Story, or Test Case as a template/example. The generated output will follow the format and structure of the selected item.
Allow Epic/User Story Generation Across Projects
Allows users to select a target project in the Generate screen. Once user stories are generated, they can be assigned to specific projects before saving. This supports distributing user stories from one Epic into multiple projects.
Generate Test Data
Automatically creates realistic test data to accompany generated test cases.
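To illustrate the kind of output this option produces, here is a minimal Python sketch (not POPal's actual generator, which uses the LLM) showing the shape of realistic customer records a generated test case might consume. The field names and value pools are invented examples:

```python
import random

# Illustrative only: POPal generates test data via the LLM; this sketch
# just shows the shape of realistic data a test case might need.
FIRST_NAMES = ["Alice", "Bob", "Carol", "Dave"]
DOMAINS = ["example.com", "test.org"]

def make_customer(seed=None):
    """Build one sample customer record (hypothetical schema)."""
    rng = random.Random(seed)
    name = rng.choice(FIRST_NAMES)
    return {
        "name": name,
        "email": f"{name.lower()}{rng.randint(1, 999)}@{rng.choice(DOMAINS)}",
        "age": rng.randint(18, 80),
    }

# Seeding makes the data reproducible across test runs.
customers = [make_customer(seed=i) for i in range(3)]
```

In practice POPal pairs each generated test case with data like this, so testers do not have to hand-craft fixtures.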
Project Context
Provides additional project-specific knowledge to the LLM so that generated backlog items are more accurate and relevant.
Context can include navigation details, sample product data, customer information, or other project specifics.
Like “Allow Issue as Context,” this enriches knowledge but does not act as scope.
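For illustration, a Project Context entry might look like the following (the navigation path, product, and customer tiers are invented examples, not defaults):

```text
Navigation: Home → Catalog → Product Detail → Cart → Checkout
Sample product: "Trail Runner 2" shoes, SKU TR2-BLK-42, price $89.99
Customer tiers: Guest, Member, Premium (Premium gets free shipping)
```

Generated user stories and test cases can then reference these paths and sample values instead of inventing their own.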
Project-Level Additional Prompt
This is a persistent prompt that applies across the project. It is always combined with the standard prompt and ensures consistent generation for all User Stories and Test Cases within the project.
For User Stories from Epics: The project-level prompt is applied whenever user stories are generated from an Epic. Example: “Please generate acceptance criteria in Gherkin (BDD) format using Given–When–Then.” This will be applied to all user stories generated from Epics within the project.
For Test Cases from User Stories: The project-level prompt is applied whenever test cases are generated for this project. Example: “Always use navigation information provided in Project Context when generating test cases.” This will be applied to all test cases generated from User Stories within the project.
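The layering of prompts described above can be sketched conceptually. This is not POPal's internal code, just an illustration of how the standard prompt, the persistent project-level prompt, and the one-off Generate-screen prompt combine:

```python
def build_prompt(standard_prompt, project_prompt=None, additional_prompt=None):
    """Combine POPal's prompt layers (conceptual sketch, not actual internals).

    - standard_prompt: always present.
    - project_prompt: persistent, applied to every generation in the project.
    - additional_prompt: entered on the Generate screen; applies to the
      current generation only.
    """
    parts = [standard_prompt]
    if project_prompt:       # Project-Level Additional Prompt
        parts.append(project_prompt)
    if additional_prompt:    # one-off prompt from the Generate screen
        parts.append(additional_prompt)
    return "\n\n".join(parts)

# The project-level prompt is included for every item generated in the project:
prompt = build_prompt(
    "Generate user stories for the Epic below.",
    project_prompt="Please generate acceptance criteria in Gherkin (BDD) "
                   "format using Given–When–Then.",
)
```

The key difference: the project-level prompt is applied to every generation in the project, while the Generate-screen prompt affects only the item currently being generated.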
Example Use Cases
Use an existing issue as context or template for new items.
Distribute user stories from one Epic to multiple projects.
Control review score scales or other constraints via prompts.
Generate precise test data for complex projects.