SOLID Principles in Test Automation: From Theory to Practical Framework Design
If you've been building UI or API automation for a while, you've probably seen a "small" test suite grow into a tangled web of brittle tests, god classes, and mysterious failures. At some point, every change feels risky and adding a new test takes longer than it should.
This is exactly where SOLID principles in test automation can save you. Originally defined for object-oriented design, SOLID gives you a set of guidelines to make your automation code easier to understand, extend, and maintain over time.
For SDETs and automation engineers, that means fewer flaky tests, faster feedback to the team, and a framework that can evolve with the product instead of fighting against it. In this article, we'll walk through each SOLID principle with concrete automation examples in Java, bad vs. good patterns, and practical tips you can apply right away.
What Are SOLID Principles in Test Automation?
SOLID is an acronym for five object-oriented design principles:
- S – Single Responsibility Principle (SRP)
- O – Open/Closed Principle (OCP)
- L – Liskov Substitution Principle (LSP)
- I – Interface Segregation Principle (ISP)
- D – Dependency Inversion Principle (DIP)
The original goal of these principles is to reduce unnecessary dependencies so you can change one part of the software without accidentally breaking another, while keeping the design flexible and easy to extend.
When you apply SOLID principles in test automation:
- Your Page Objects and API clients are easier to read and reuse.
- Test classes focus on verifying behaviour, not doing setup gymnastics.
- You can introduce new browsers, environments or data sources without refactoring the whole framework.
- Your test suite becomes less flaky and more predictable.
Industry experience shows that automation frameworks designed with SOLID in mind stay scalable and maintainable much longer than "quick and dirty" solutions.
How Each SOLID Principle Maps to Test Automation
Single Responsibility Principle (SRP)
- Meaning: Each class or test has one clear reason to change (one responsibility).
- Typical uses: One Page Object per page, one feature per test class, one concern per helper.
- If ignored: "God" test classes, giant helpers doing everything, hard-to-follow flows.
Open/Closed Principle (OCP)
- Meaning: You extend behaviour by adding classes/config, not constantly editing core framework code.
- Typical uses: Add new browsers, environments, or API versions via new implementations.
- If ignored: Every change touches shared base classes and introduces regressions.
Liskov Substitution Principle (LSP)
- Meaning: You can swap one implementation (e.g., Web vs Mobile page) without changing test logic.
- Typical uses: Common interfaces for pages/services reused across platforms.
- If ignored: Code duplication per platform, tests tied to concrete classes.
Interface Segregation Principle (ISP)
- Meaning: Clients (tests) depend on small, focused interfaces, not huge "kitchen sink" ones.
- Typical uses: LoginActions, CartActions, SearchActions instead of one giant PageActions.
- If ignored: Fat interfaces nobody fully understands, accidental coupling between features.
Dependency Inversion Principle (DIP)
- Meaning: High-level tests depend on abstractions (interfaces), not concrete tools. Implementations are passed in (e.g., via factories/DI).
- Typical uses: Repositories, API clients, and drivers are accessed via interfaces.
- If ignored: Tests create their own dependencies; hard to mock, hard to switch, fragile.
Keep these principles in mind as we walk through practical examples.
Applying SRP and OCP in Test Automation
We'll start with SRP and OCP, because they usually deliver the fastest wins for a messy automation suite.
Single Responsibility Principle (SRP): Keep Tests and Pages Focused
Definition recap: A class should have only one reason to change.
In automation, SRP usually means:
- One test class per feature or user story (e.g., LoginTests, CheckoutTests).
- One Page Object per page or component (e.g., LoginPage, ShoppingCartPage, HeaderComponent).
- One helper/utility per concern (e.g., JsonUtils, DbUtils, FileUtils).
Bad Example: The "God" Test Class
This is a very common anti-pattern: a single test class that does login, search, checkout and account verification all in one place.
// ❌ Bad: One test class doing everything
public class SmokeSuiteTest {
private WebDriver driver;
@BeforeEach
void setUp() {
// driver setup…
}
@Test
void userCanRegisterLoginCheckoutAndLogout() {
// Registration steps
// Login steps
// Search product
// Add to cart
// Checkout
// Logout
// All assertions here...
}
@AfterEach
void tearDown() {
driver.quit();
}
}
What's wrong here?
- One test method tries to verify multiple features.
- If login changes, this test breaks even though checkout still works.
- Failures are hard to diagnose (you don't know which step really failed).
- You can't easily run only checkout tests or only registration tests.
Good Example: Separate Responsibilities
Split the huge test into focused test classes and delegate UI operations to Page Objects.
// ✅ Better: Each class has a single responsibility
public class LoginTests {
private WebDriver driver;
private LoginPage loginPage;
@BeforeEach
void setUp() {
driver = WebDriverFactory.create();
loginPage = new LoginPage(driver);
}
@Test
void validUserCanLogin() {
loginPage.open();
loginPage.loginAs("validUser", "validPassword");
loginPage.assertUserIsLoggedIn();
}
@AfterEach
void tearDown() {
driver.quit();
}
}
public class CheckoutTests {
private WebDriver driver;
private LoginPage loginPage;
private ProductPage productPage;
private CheckoutPage checkoutPage;
@BeforeEach
void setUp() {
driver = WebDriverFactory.create();
loginPage = new LoginPage(driver);
productPage = new ProductPage(driver);
checkoutPage = new CheckoutPage(driver);
}
@Test
void userCanCheckoutSingleItem() {
loginPage.open();
loginPage.loginAs("buyer", "password");
productPage.open("SKU-123");
productPage.addToCart();
checkoutPage.open();
checkoutPage.checkoutWithDefaultAddress();
checkoutPage.assertOrderIsSuccessful();
}
@AfterEach
void tearDown() {
driver.quit();
}
}
Now:
- Each test class has one clear reason to change.
- Failures are localised (login failures show up in LoginTests).
- You scale the suite by adding new test classes, not by growing a monster class.
SRP for Helpers and Factories
Another SRP anti-pattern is a single TestUtils class that does everything: waits, DB queries, JSON handling, date conversion, etc.
A better approach:
- WaitUtils – explicit waits and synchronisation logic.
- DbClient or UserRepository – DB access.
- JsonUtils – JSON serialisation/deserialisation.
Smaller, focused classes are easier to test, mock, and reuse.
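To make that concrete, here is a minimal, framework-agnostic sketch of one such focused helper. The class name RetryUtils is hypothetical (not from the framework above): its single responsibility is retrying an action, and nothing else.

```java
import java.util.function.Supplier;

// Hypothetical single-responsibility helper: it knows how to retry
// a flaky action, and nothing else. Waits, DB access and JSON handling
// would each live in their own equally small class.
final class RetryUtils {

    private RetryUtils() {
        // utility class, no instances
    }

    // Runs the action up to maxAttempts times, returning the first
    // successful result and rethrowing the last failure otherwise.
    static <T> T retry(int maxAttempts, Supplier<T> action) {
        if (maxAttempts < 1) {
            throw new IllegalArgumentException("maxAttempts must be >= 1");
        }
        RuntimeException lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                lastFailure = e;
            }
        }
        throw lastFailure;
    }
}
```

Because the class has exactly one job, it is trivial to unit-test, mock, and reuse anywhere in the suite.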
Open/Closed Principle (OCP): Extend, Don't Hack
Definition recap: Software entities should be open for extension, but closed for modification.
In test automation, OCP really shines when:
- New versions of APIs are introduced.
- You add new browsers or platforms (web, mobile web, native apps).
- You need different behaviour for different environments (staging vs production) without rewriting all tests.
API Versioning Example
Imagine you have a registration endpoint /api/v1/register. Later, /api/v2/register is introduced with extra fields (e.g., dateOfBirth).
Bad Approach: Modify Existing DTO
// ❌ Bad: Modifying existing data object directly
public class RegisterRequest {
private String firstName;
private String lastName;
private String email;
private String photo;
// later someone adds:
private LocalDate dateOfBirth; // breaks v1 tests if not handled correctly
// getters/setters...
}
Problems:
- Old tests sending v1 payloads may break silently.
- You might accidentally send dateOfBirth to the v1 endpoint and get strange failures.
- It's harder to reason about which fields belong to which version.
Good Approach: Extend Without Modifying
// ✅ Good: Separate classes, reuse common fields via inheritance or composition
public class RegisterRequestV1 {
private String firstName;
private String lastName;
private String email;
private String photo;
// getters/setters...
}
public class RegisterRequestV2 extends RegisterRequestV1 {
private LocalDate dateOfBirth;
// getters/setters...
}
Now:
- v1 tests keep using RegisterRequestV1 unchanged.
- v2 tests extend behaviour by using RegisterRequestV2.
- You extend the model instead of changing it and risking regressions.
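The payoff is that any code written against the v1 type keeps working when handed a v2 request, because a RegisterRequestV2 is-a RegisterRequestV1. A compressed, self-contained sketch (fields trimmed to one for brevity; the RegistrationClient helper is hypothetical):

```java
import java.time.LocalDate;

class RegisterRequestV1 {
    private String email;

    String getEmail() { return email; }
    void setEmail(String email) { this.email = email; }
}

class RegisterRequestV2 extends RegisterRequestV1 {
    private LocalDate dateOfBirth;

    LocalDate getDateOfBirth() { return dateOfBirth; }
    void setDateOfBirth(LocalDate dateOfBirth) { this.dateOfBirth = dateOfBirth; }
}

class RegistrationClient {
    // Hypothetical helper that only knows about v1 fields,
    // yet accepts v2 requests unchanged (open for extension).
    static String describe(RegisterRequestV1 request) {
        return "register " + request.getEmail();
    }
}
```

Nothing in RegistrationClient had to change when v2 was introduced, which is exactly what OCP is after.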
OCP in Base Test Classes
A common pattern is to have a shared base test for web UI:
public abstract class UiBaseTest {
protected WebDriver driver;
@BeforeEach
void setUp() {
driver = WebDriverFactory.create();
driver.manage().window().maximize();
}
@AfterEach
void tearDown() {
if (driver != null) {
driver.quit();
}
}
}
Later, you might want a special base test for "account" functionality that loads user data from JSON:
public abstract class AccountBaseTest extends UiBaseTest {
protected User loadUser(String userType) {
return TestUserRepository.getUser(userType);
}
}
Instead of editing UiBaseTest every time a team has a new need, you extend it to add behaviour. This follows OCP and reduces the chance of breaking unrelated tests.
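The TestUserRepository referenced above is not shown in full; here is a minimal map-backed sketch. The User fields and the user-type keys are assumptions for illustration; a real implementation might load the same data from JSON.

```java
import java.util.Map;

// Minimal user model for illustration.
record User(String username, String password) { }

// Hypothetical map-backed repository of test users.
final class TestUserRepository {

    private static final Map<String, User> USERS = Map.of(
            "default", new User("buyer", "password"),
            "admin", new User("root", "secret")
    );

    private TestUserRepository() { }

    static User getUser(String userType) {
        User user = USERS.get(userType);
        if (user == null) {
            throw new IllegalArgumentException("Unknown user type: " + userType);
        }
        return user;
    }
}
```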
Applying LSP, ISP and DIP in Test Automation
The last three SOLID principles—Liskov Substitution (LSP), Interface Segregation (ISP) and Dependency Inversion (DIP)—are all about abstractions and decoupling. This is where your framework starts to feel truly "pluggable".
Liskov Substitution Principle (LSP): Swappable Implementations
Definition recap: Objects of a superclass should be replaceable by objects of its subclasses without breaking correctness.
In automation, you want to be able to swap implementations (e.g. web vs mobile) without changing test logic.
Bad Example: Test Tied to a Concrete Class
// ❌ Bad: Test tied to a specific implementation
public class LoginTests {
private WebDriver driver;
private WebLoginPage loginPage; // concrete class
@BeforeEach
void setUp() {
driver = WebDriverFactory.create();
loginPage = new WebLoginPage(driver);
}
@Test
void userCanLogin() {
loginPage.open();
loginPage.login("user", "password");
loginPage.assertLoggedIn();
}
}
If you want to reuse the same test steps for mobile web or native app, you now need to duplicate tests or heavily refactor them.
Good Example: Test Depends on an Abstraction
// Abstraction
public interface LoginPage {
void open();
void login(String username, String password);
void assertLoggedIn();
}
// Web implementation
public class WebLoginPage implements LoginPage {
private final WebDriver driver;
public WebLoginPage(WebDriver driver) {
this.driver = driver;
}
@Override
public void open() {
driver.get("https://example.com/login");
}
@Override
public void login(String username, String password) {
// locate fields, fill, submit
}
@Override
public void assertLoggedIn() {
// assertion for web app
}
}
// Mobile implementation (example)
public class MobileLoginPage implements LoginPage {
private final AppiumDriver<?> driver;
public MobileLoginPage(AppiumDriver<?> driver) {
this.driver = driver;
}
@Override
public void open() {
// navigate to login screen in mobile app
}
@Override
public void login(String username, String password) {
// mobile-specific locators/actions
}
@Override
public void assertLoggedIn() {
// assertion for mobile app
}
}
// Test depends on LoginPage interface, not a concrete class
public class LoginTests {
private LoginPage loginPage;
@BeforeEach
void setUp() {
loginPage = PageFactory.createLoginPage(); // decides web/mobile
}
@Test
void userCanLogin() {
loginPage.open();
loginPage.login("user", "password");
loginPage.assertLoggedIn();
}
}
Now you can swap implementations (web, mobile, different brands) without touching the test code.
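The PageFactory used above is not shown in the article; here is a minimal sketch, assuming the platform is chosen via a system property. The FakeLoginPage is purely for illustration (real web/mobile implementations would be wired the same way), and the interface uses isLoggedIn() instead of assertLoggedIn() so the sketch can be checked directly.

```java
// Abstraction the tests depend on.
interface LoginPage {
    void open();
    void login(String username, String password);
    boolean isLoggedIn();
}

// Stand-in implementation for this sketch; in a real framework this
// would be WebLoginPage or MobileLoginPage.
class FakeLoginPage implements LoginPage {
    private boolean loggedIn;

    @Override
    public void open() { /* navigate to login screen */ }

    @Override
    public void login(String username, String password) {
        loggedIn = username != null && password != null;
    }

    @Override
    public boolean isLoggedIn() { return loggedIn; }
}

// Hypothetical factory: picks the implementation from configuration,
// so tests never name a concrete page class.
final class PageFactory {
    private PageFactory() { }

    static LoginPage createLoginPage() {
        String platform = System.getProperty("test.platform", "fake");
        switch (platform) {
            // case "web":    return new WebLoginPage(driver);
            // case "mobile": return new MobileLoginPage(driver);
            default:          return new FakeLoginPage();
        }
    }
}
```

Switching from web to mobile then becomes a configuration change rather than a test-code change.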
Interface Segregation Principle (ISP): Slim Contracts
Definition recap: Clients should not be forced to depend on interfaces they do not use.
In test automation, "clients" are usually:
- Your test classes
- Other Page Objects or services that use an interface
Bad Example: Fat Interface
// ❌ Bad: Interface doing too much
public interface PageActions {
void open();
void login(String username, String password);
void addItemToCart(String sku);
void removeItemFromCart(String sku);
void checkout();
void applyCoupon(String code);
void search(String query);
void updateProfile(String name, String email);
// ... and many more
}
Every class implementing this has to provide meaningless implementations for methods it doesn't care about (e.g., a LoginPage implementing cart methods).
Good Example: Split by Behaviour
public interface NavigablePage {
void open();
}
public interface LoginActions {
void login(String username, String password);
}
public interface CartActions {
void addItemToCart(String sku);
void removeItemFromCart(String sku);
}
public class LoginPage implements NavigablePage, LoginActions {
@Override
public void open() {
// open login URL
}
@Override
public void login(String username, String password) {
// fill username, password, click login
}
}
public class ShoppingCartPage implements NavigablePage, CartActions {
@Override
public void open() {
// open cart URL
}
@Override
public void addItemToCart(String sku) {
// implementation
}
@Override
public void removeItemFromCart(String sku) {
// implementation
}
}
Tests now depend only on the behaviours they need, making code easier to understand, mock and reuse.
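As a usage sketch, a shared login step can be typed against LoginActions alone, so it accepts any page that can log in and knows nothing about carts or search. The LoginSteps helper and its credentials are hypothetical:

```java
// Slim, behaviour-focused contracts.
interface LoginActions {
    void login(String username, String password);
}

interface CartActions {
    void addItemToCart(String sku);
}

// Hypothetical reusable step: it depends only on the one behaviour
// it needs, not on a fat PageActions interface.
final class LoginSteps {
    private LoginSteps() { }

    static void loginAsDefaultUser(LoginActions page) {
        page.login("buyer", "password");
    }
}
```

Any page object implementing LoginActions, web or mobile, can be passed in; CartActions never leaks into the login flow.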
Dependency Inversion Principle (DIP): Decouple from Infrastructure
Definition recap: High-level modules should not depend on low-level modules. Both should depend on abstractions.
This is crucial for test automation, where your code often touches:
- Browsers (WebDriver)
- Databases
- Message queues
- Third-party APIs
You want your tests and business-level components to depend on interfaces, not concrete tools, so you can switch or mock them easily.
Bad Example: Test Creates Its Own Dependencies
// ❌ Bad: Test creates concrete dependencies directly
public class OrderHistoryTests {
private WebDriver driver;
private RealDatabaseClient dbClient;
@BeforeEach
void setUp() {
driver = new ChromeDriver();
dbClient = new RealDatabaseClient("jdbc://prod-db");
}
@Test
void userSeesLatestOrderInHistory() {
// use dbClient to prepare data
// use driver to open UI and validate
}
}
Issues:
- Hard to run against different browsers or temporary DBs.
- No way to replace RealDatabaseClient with a mock in tests.
- Tests become slow and flaky when external dependencies are unstable.
Good Example: Depend on Abstractions, Inject Implementations
// Abstraction for DB
public interface OrderRepository {
void createOrder(Order order);
List<Order> findOrdersForUser(String userId);
}
// Real implementation
public class SqlOrderRepository implements OrderRepository {
private final DataSource dataSource;
public SqlOrderRepository(DataSource dataSource) {
this.dataSource = dataSource;
}
@Override
public void createOrder(Order order) {
// insert into database
}
@Override
public List<Order> findOrdersForUser(String userId) {
// query database
return List.of();
}
}
// Test base that receives dependencies
public abstract class UiOrderBaseTest {
protected WebDriver driver;
protected OrderRepository orderRepository;
@BeforeEach
void setUp() {
driver = WebDriverFactory.create(); // decides Chrome/Firefox, local/remote
orderRepository = RepositoryFactory.createOrders(); // decides real/mock/remote
}
@AfterEach
void tearDown() {
driver.quit();
}
}
public class OrderHistoryTests extends UiOrderBaseTest {
private OrderHistoryPage orderHistoryPage;
@BeforeEach
void initPage() {
orderHistoryPage = new OrderHistoryPage(driver);
}
@Test
void userSeesLatestOrderInHistory() {
Order order = TestDataFactory.createRandomOrder();
orderRepository.createOrder(order);
orderHistoryPage.openForUser(order.getUserId());
orderHistoryPage.assertOrderVisible(order);
}
}
Now:
- You can plug in a mock repository for fast, isolated tests.
- You can run tests against different DBs or environments via configuration.
- High-level test logic remains untouched when infrastructure changes.
Applying SOLID this way leads to cleaner, more robust automation code that evolves easily with your project.
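The "mock repository" mentioned above can be as simple as an in-memory map. A self-contained sketch, with the Order model trimmed to the two fields used here:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal order model for illustration.
record Order(String id, String userId) { }

interface OrderRepository {
    void createOrder(Order order);
    List<Order> findOrdersForUser(String userId);
}

// In-memory implementation: fast, isolated, no database required.
// RepositoryFactory.createOrders() could return this in local runs
// and SqlOrderRepository in full integration runs.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<String, List<Order>> ordersByUser = new ConcurrentHashMap<>();

    @Override
    public void createOrder(Order order) {
        ordersByUser.computeIfAbsent(order.userId(), k -> new ArrayList<>()).add(order);
    }

    @Override
    public List<Order> findOrdersForUser(String userId) {
        return List.copyOf(ordersByUser.getOrDefault(userId, List.of()));
    }
}
```

Because OrderHistoryTests only sees the OrderRepository interface, swapping this in requires no change to the test itself.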
Mini Case Study: From Fragile to Flexible
Imagine a legacy checkout module where:
- Tests instantiate ChromeDriver directly.
- Data comes from hard-coded SQL inside test methods.
- A single CheckoutTest class covers login, search, cart, and payment.
By applying SRP + DIP:
- You split flows into LoginTests, CartTests, CheckoutTests.
- You introduce UserRepository and OrderRepository interfaces with real and mock implementations.
- You centralise driver creation in WebDriverFactory.
Result: tests become faster to read, easier to isolate, and far less likely to break when you switch to a grid, add Firefox, or point to a new DB.
Best Practices Summary (Checklist)
Use this checklist to bring SOLID principles into your test automation framework:
- Keep classes focused (SRP): One page, one component, one feature, one concern per class. Avoid "god" classes and giant utility bags.
- One behaviour per test: Each test method should verify a single behaviour or scenario, not a full end-to-end tour of the app.
- Extend, don't edit (OCP): When adding a new environment, API version, or platform, create new classes/config instead of constantly modifying existing ones.
- Depend on interfaces, not concrete classes (LSP + DIP): Design your Page Objects, repositories and services with interfaces; let factories/config choose implementations.
- Split fat interfaces (ISP): If an interface has too many responsibilities, split it into smaller ones (e.g., LoginActions, CartActions, SearchActions).
- Use factories for infrastructure: Centralise creation of WebDriver instances, repositories, and API clients in factories so tests don't know about low-level details.
- Prefer composition over deep inheritance: Share common behaviours via composed helper classes when inheritance chains become deep or confusing.
- Align names with responsibilities: Class and method names should clearly express the single responsibility (e.g., CheckoutPage, UserApiClient).
- Continuously refactor: Treat your test code as production code. Refactor when you spot duplication, long methods, or mixed responsibilities.
- Review tests for design, not just assertions: Code reviews should check design quality (SOLID compliance), not only locators and assertions.
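The "factories for infrastructure" item in the checklist can be sketched without naming any specific driver or database library: a registry of suppliers keyed by configuration. All names here are hypothetical; the point is the pattern, not the API.

```java
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical generic factory: maps a configuration key to a way of
// building an infrastructure object, so tests only ask for "the
// configured one" and never construct low-level tools themselves.
final class InfrastructureFactory<T> {
    private final Map<String, Supplier<T>> providers;
    private final String defaultKey;

    InfrastructureFactory(Map<String, Supplier<T>> providers, String defaultKey) {
        this.providers = providers;
        this.defaultKey = defaultKey;
    }

    T create(String key) {
        Supplier<T> provider = providers.getOrDefault(key, providers.get(defaultKey));
        return provider.get();
    }
}
```

A WebDriverFactory or RepositoryFactory built this way lets you add Firefox, a Selenium grid, or an in-memory DB by registering a new supplier, without touching existing tests.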
FAQ
1. Do I really need all five SOLID principles in my test automation framework?
Not immediately. Start with SRP and OCP because they give the fastest payoff: simpler tests and safer extensions. As your framework grows, LSP, ISP and DIP become essential to keep it flexible and decoupled.
2. How do SOLID principles reduce flaky tests?
Flakiness often comes from hidden coupling and shared state. When each class has a clear responsibility and dependencies are explicit (via DIP), it's easier to isolate what a test is doing, control timing and data, and avoid side effects between tests.
3. Isn't SOLID overkill for a small test suite?
For a tiny project with a handful of tests, you can get away with simpler code. But test suites tend to grow fast. Designing with SOLID from the beginning makes future growth smoother and avoids painful rewrites when "small" becomes "critical".
4. How can I introduce SOLID principles into an existing legacy framework?
Pick one pain point (e.g., an unstable module or a huge god class) and refactor it gradually:
- Extract clearer interfaces.
- Move helper logic into focused classes.
- Replace direct dependency creation with factories or DI.
- Add tests for your framework code itself to protect refactors.
Iterate feature by feature instead of trying to rewrite everything at once.
5. Which tools help with SOLID in test automation?
The principles themselves are tool-agnostic, but you can make implementation easier with:
- Dependency injection frameworks (Spring, Guice, etc.).
- Build tools & runners (Maven/Gradle + JUnit/TestNG) to structure modules.
- Static analysis tools that flag god classes, long methods, or cyclic dependencies.
The key is design, not the tool—tools just support the approach.
Conclusion
Applying SOLID principles in test automation isn't about making your framework "academic" or over-engineered. It's about writing automation code that you and your team will still be happy to work with six months from now.
By keeping classes focused, extending behaviour instead of constantly modifying it, relying on abstractions, and splitting large interfaces into smaller ones, you create a framework that:
- Is easier to understand and onboard new team members to.
- Can adapt quickly to new features, platforms and environments.
- Produces more stable, less flaky tests, giving faster feedback to your developers.
Pick one area of your current suite—maybe a messy Page Object or a huge test class—and refactor it using one SOLID principle from this article. Once you see the improvement, you'll naturally start applying the same thinking across the rest of your automation framework.
Sensei Omar Alaa is a Senior QA & Test Automation Engineer with 4+ years of experience in the fintech domain, specialized in Java, Selenium, BDD, TestNG, and API testing, with strong experience leading testing teams and ensuring high-quality releases.
His journey started in manual testing, then he moved deeper into automation—building and enhancing Selenium-based frameworks, and even applying OCR to automate CAPTCHA within regression workflows.
After mentoring and training testers, Omar founded Quality Sensei to deliver practical, structured testing education through hands-on labs and real-world scenarios.
Areas of Expertise:
- Selenium WebDriver (Java) & Automation Frameworks
- BDD, Test Design, and Release Sign-off Quality
- API Testing (Postman, Rest Assured)
- Performance Testing (JMeter – basic)
- Team Leadership, Mentorship & QA Process Improvement