17. Automated Testing
Automated testing is a crucial part of the modern SAPUI5/Fiori application development process. It allows you to:
- Catch errors early,
- Simplify refactoring,
- Keep the application stable when updating dependencies and the platform,
- Save time on manual testing.
Types of tests
1. Unit tests
Test individual functions, classes, and utilities in isolation.
Recommended tools:
- QUnit for unit tests (bundled with UI5),
- Sinon.js for stubs, spies, and mocks.
Best practice:
- Move business logic out of controllers into separate services/utilities — they are easier to test.
- Don't test the UI5 framework itself; test your own logic.
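For example, a minimal sketch of such a unit test, assuming a hypothetical formatter module myapp/model/formatter (the module path, function name, and status codes are invented for illustration):

```js
// webapp/test/unit/model/formatter.qunit.js
// Hypothetical module path and status codes, for illustration only.
sap.ui.define(["myapp/model/formatter"], function (formatter) {
    "use strict";

    QUnit.module("formatter.statusText");

    QUnit.test("maps known status codes to readable text", function (assert) {
        assert.strictEqual(formatter.statusText("A"), "Approved", "'A' becomes 'Approved'");
        assert.strictEqual(formatter.statusText("R"), "Rejected", "'R' becomes 'Rejected'");
    });

    QUnit.test("falls back to the raw value for unknown codes", function (assert) {
        assert.strictEqual(formatter.statusText("X"), "X", "unknown codes are passed through");
    });
});
```

Because the formatter is a plain module with no control or view dependencies, the test needs no rendering and runs in milliseconds.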
2. Integration tests
Test the interaction between modules: working with models, services, and routing.
Best practice:
- Use QUnit for integration scenarios (e.g., data loading, error handling).
- Mock external services (OData, REST) via Sinon.js stubs or the UI5 MockServer instead of calling real backends.
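As a sketch of the stubbing approach, assuming a hypothetical OrderService module with a loadOrders method and an internal _request helper (all names invented for illustration), and assuming QUnit 2 style promise-returning tests plus the Sinon build shipped with UI5:

```js
// webapp/test/integration/OrderService.qunit.js
// OrderService, loadOrders and _request are hypothetical names.
sap.ui.define([
    "myapp/service/OrderService",
    "sap/ui/thirdparty/sinon-4"
], function (OrderService, sinon) {
    "use strict";

    QUnit.module("OrderService.loadOrders", {
        afterEach: function () {
            // Remove the stub so tests stay independent of each other
            this.oRequestStub.restore();
        }
    });

    QUnit.test("resolves with the orders returned by the backend", function (assert) {
        this.oRequestStub = sinon.stub(OrderService, "_request")
            .resolves({ results: [{ OrderID: "1" }] });

        return OrderService.loadOrders().then(function (aOrders) {
            assert.strictEqual(aOrders.length, 1, "one order is returned");
        });
    });

    QUnit.test("propagates backend errors to the caller", function (assert) {
        this.oRequestStub = sinon.stub(OrderService, "_request")
            .rejects(new Error("503 Service Unavailable"));

        return OrderService.loadOrders().then(
            function () {
                assert.ok(false, "the promise must not resolve");
            },
            function (oError) {
                assert.ok(oError, "the error reaches the caller");
            }
        );
    });
});
```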
3. UI/End-to-End tests
Test the application “through the eyes of the user”: clicks, data entry, navigation between screens.
Recommended tools:
- OPA5 (One Page Acceptance tests, part of UI5, running in the browser together with the application),
- wdi5 (end-to-end tests based on WebdriverIO, driving a real browser).
Best practice:
- Cover main user scenarios (login, create/edit/delete entity, navigation) with smoke tests.
- Don't try to cover everything with UI tests: they are expensive to maintain, so focus on critical business processes.
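A condensed OPA5 journey sketch for one such smoke scenario; the component name, view names, and control IDs are made up for illustration:

```js
// webapp/test/integration/CreateOrderJourney.js
sap.ui.define([
    "sap/ui/test/opaQunit",
    "sap/ui/test/Opa5",
    "sap/ui/test/actions/Press"
], function (opaTest, Opa5, Press) {
    "use strict";

    QUnit.module("Create order journey");

    opaTest("Navigates from the worklist to the create form", function (Given, When, Then) {
        // Start the app as a UI component inside the test page
        Given.iStartMyUIComponent({ componentConfig: { name: "myapp" } });

        // Simulate the user's click on the "Create" button
        When.waitFor({
            id: "createOrderButton",
            viewName: "Worklist",
            actions: new Press(),
            errorMessage: "The create button was not found"
        });

        // Assert that the target view is reached
        Then.waitFor({
            id: "orderForm",
            viewName: "CreateOrder",
            success: function () {
                Opa5.assert.ok(true, "The order form is displayed");
            },
            errorMessage: "Navigation to the create form failed"
        });

        Then.iTeardownMyUIComponent();
    });
});
```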
How to organize tests in a project
- Put all tests (unit, integration, OPA) in the /webapp/test or /test folder.
- For QUnit/OPA5 use the standard structure:
  /webapp
    /test
      /unit
      /integration
      /opa
- For wdi5, keep a separate /e2e folder in the project root.
CI/CD integration
- Run tests automatically on every commit/merge (e.g., via GitHub Actions, Jenkins, GitLab CI).
- For QUnit/OPA5 use a headless browser (e.g., Headless Chrome driven by Karma or Puppeteer).
- For wdi5, run the tests in a Docker container or on a dedicated runner.
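A minimal karma.conf.js sketch for that headless setup, assuming the karma-ui5 plugin and a standard webapp/test layout:

```js
// karma.conf.js
module.exports = function (config) {
    config.set({
        frameworks: ["ui5"],          // karma-ui5 discovers the QUnit/OPA5 testsuite under webapp/test
        browsers: ["ChromeHeadless"], // no visible browser window, suitable for CI runners
        singleRun: true               // exit with a pass/fail status after one run
    });
};
```

The same configuration runs locally and on the CI server; the CI job only needs Node.js and a Chrome/Chromium binary.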
Tips and best practices
- Write testable code: move business logic out of controllers, avoid hard dependencies.
- Mock all externals: don’t rely on real services/backends in unit/integration tests.
- Test not only the happy path: check errors, empty responses, and edge cases (see the sketch after this list).
- Document scenarios: describe what each test checks.
- Don’t forget linters and static analysis — this is also part of quality automation (see "Linters").
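For instance, non-happy-path checks for a hypothetical quantity parsing helper could look like this (the module name and its error behavior are assumptions):

```js
// webapp/test/unit/util/quantity.qunit.js
sap.ui.define(["myapp/util/quantity"], function (quantity) {
    "use strict";

    QUnit.module("quantity.parse: edge cases");

    QUnit.test("treats an empty string as zero", function (assert) {
        assert.strictEqual(quantity.parse(""), 0, "empty input does not break the calculation");
    });

    QUnit.test("reports negative input as an error", function (assert) {
        assert.throws(
            function () { quantity.parse("-5"); },
            /negative/,
            "negative quantities are rejected instead of being silently accepted"
        );
    });
});
```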
When tests are necessary and when not (with examples)
When tests are mandatory
- Critical business logic
  - Example: calculating the final order amount with discounts, taxes, and currencies.
  - Why: an error can lead to financial loss or legal issues.
  - How to test: unit tests for the calculation functions (see the pricing sketch after this list), integration tests for the interaction with the data model.
- Complex user scenarios
  - Example: a multi-step wizard for document creation, with checks and branches at each step.
  - Why: easy to break during changes, hard to check all the variants manually.
  - How to test: e2e tests (OPA5/wdi5), unit tests for the individual steps.
- Migrations, refactoring, dependency updates
  - Example: migrating from UI5 1.71 to 1.120, mass renaming of models or services.
  - Why: high risk of breaking something in different parts of the application.
  - How to test: run the full set of unit and e2e tests after the changes.
- Reusable components used everywhere
  - Example: a common date picker used on 20+ screens.
  - Why: a bug in the component affects the whole application.
  - How to test: unit tests for the component, integration tests for typical scenarios.
- Integration with external services
  - Example: payment gateway integration, sending data to the SAP backend.
  - Why: errors can lead to data loss or incorrect business processes.
  - How to test: mock the external services (see the MockServer sketch after this list), unit and integration tests.
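Sticking with the first example above, a sketch of how order-total logic can be kept as a pure module and covered by a unit test; the module name, rounding rule, and rates are illustrative assumptions:

```js
// myapp/util/pricing.js: pure calculation logic, no controls or models involved
sap.ui.define([], function () {
    "use strict";
    return {
        // aItems: [{ price, quantity }]; fDiscount and fTaxRate as fractions (0.1 = 10 %)
        orderTotal: function (aItems, fDiscount, fTaxRate) {
            var fNet = aItems.reduce(function (fSum, oItem) {
                return fSum + oItem.price * oItem.quantity;
            }, 0);
            var fGross = fNet * (1 - fDiscount) * (1 + fTaxRate);
            return Math.round(fGross * 100) / 100; // round to cents
        }
    };
});
```

```js
// webapp/test/unit/util/pricing.qunit.js
sap.ui.define(["myapp/util/pricing"], function (pricing) {
    "use strict";

    QUnit.module("pricing.orderTotal");

    QUnit.test("applies the discount before tax", function (assert) {
        var aItems = [{ price: 100, quantity: 2 }];
        // 200 net, 10 % discount => 180, 19 % tax => 214.20
        assert.strictEqual(pricing.orderTotal(aItems, 0.1, 0.19), 214.2);
    });
});
```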
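For the external services case, one common option in UI5 is the bundled MockServer, which intercepts OData requests to a given root URI and answers them from local metadata and mock data. The service path and file locations below are assumptions:

```js
// webapp/test/mockServer.js (illustrative location)
sap.ui.define(["sap/ui/core/util/MockServer"], function (MockServer) {
    "use strict";

    // Intercept calls to the real OData service URL
    var oMockServer = new MockServer({
        rootUri: "/sap/opu/odata/sap/ZORDER_SRV/" // hypothetical service path
    });

    // Answer requests from local metadata and JSON mock data files
    oMockServer.simulate("localService/metadata.xml", {
        sMockdataBaseUrl: "localService/mockdata",
        bGenerateMissingMockData: true
    });

    oMockServer.start();
});
```

With the MockServer started before the tests run, unit and integration tests exercise the same request and response handling code as production, just without the real backend.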
When tests are not mandatory (but still useful)
- Temporary prototypes, MVPs
  - Example: quick prototype for a customer demo, not for production.
  - Why: code will be thrown away, no point in spending time on tests.
- Minor visual tweaks
  - Example: changed button color, adjusted spacing, changed screen text.
  - Why: such changes are easy to check visually, automation won't pay off.
- One-off scripts/utilities
  - Example: data migration script run once manually.
  - Why: easier to check manually than write tests for code that won't be reused.
- Very simple components
  - Example: component that just displays static text or an icon.
  - Why: minimal chance of error, tests add little value.
Edge cases
- If a component/function may become critical in the future — better to add tests right away.
- If the team changes often and the project is long-lived — tests save time on onboarding and maintenance.
- If you have CI/CD — even minimal test coverage helps catch regressions automatically.
Summary:
Tests are a risk management tool. The higher the cost of an error, the more automation you need. But don’t aim for 100% coverage at any cost — a reasonable balance between effort and benefit is always more important.