Enhancing OpenMRS Performance Testing: My Contribution to Workflow Automation and Load Testing

I’m thrilled to share my recent contributions to the OpenMRS project, particularly focusing on performance testing. OpenMRS, an open-source software platform that provides a customizable, scalable electronic medical record system, has been a significant player in healthcare technology. My recent contributions aim to improve the reliability of performance testing workflows and enhance load testing coverage.
In this article, I’ll walk you through the two pull requests (PRs) I recently contributed to the OpenMRS performance test repository: one focused on improving load test coverage for allergy recording during patient visits, and the other on automating test report generation.
PR 1: Enhancing Load Test Coverage — “Add Allergy Recording to Patient Visit Scenario (load test coverage)”
Load testing is a critical part of ensuring that an application can scale under heavy usage and maintain consistent performance. In this PR, I focused on adding load test coverage for a key user scenario in OpenMRS: allergy recording during a patient visit. This is a core function in the patient management flow, and ensuring it works under stress is essential for delivering a reliable product.
The objective was to simulate the load of multiple users simultaneously recording allergies for patients and verify that the system could handle this scenario without significant degradation in performance. Here’s the link to the PR: PR #58.
What I Changed:
- Created and added a new load test scenario that simulates the process of recording allergies during patient visits.
- Incorporated parameters such as the number of simultaneous users and the frequency of actions to stress-test the system under realistic conditions.
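To make the idea concrete, here is a minimal Python sketch of what such a scenario does conceptually: run one “record allergy” action across many concurrent virtual users and summarize the observed latencies. This is an illustration only, not the actual scenario from PR #58 (the real suite is built on Gatling); the `record_allergy` placeholder and its simulated delay are assumptions standing in for a real OpenMRS REST call.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def record_allergy(request_id: int) -> None:
    """Placeholder for one user's allergy-recording request.

    In a real load test this would POST to an OpenMRS REST endpoint;
    here it just sleeps to stand in for network and server latency.
    """
    time.sleep(0.01)


def run_load(action, users: int, iterations: int) -> dict:
    """Run `action` `iterations` times across `users` concurrent workers
    and return simple latency statistics in milliseconds."""

    def timed(i: int) -> float:
        start = time.perf_counter()
        action(i)
        return (time.perf_counter() - start) * 1000

    with ThreadPoolExecutor(max_workers=users) as pool:
        latencies = list(pool.map(timed, range(iterations)))

    ordered = sorted(latencies)
    return {
        "requests": len(latencies),
        "mean_ms": statistics.mean(latencies),
        "p95_ms": ordered[int(0.95 * (len(ordered) - 1))],
    }


if __name__ == "__main__":
    # Simulate 20 concurrent users recording allergies 100 times in total.
    print(run_load(record_allergy, users=20, iterations=100))
```

The `users` and `iterations` parameters mirror the kind of knobs the real scenario exposes (number of simultaneous users, frequency of actions); tuning them is how the test is pushed toward realistic stress levels.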
Why It Matters:
- Comprehensive Load Testing: By including the allergy recording scenario, I expanded the test coverage to include a key patient interaction that is likely to experience significant usage in real-world conditions.
- Better Scalability Insights: With this added test, the team now has a clearer picture of how the system behaves when handling multiple patients’ allergy records, providing actionable insights into areas that might need optimization.
This enhancement not only strengthens the test suite but also provides the team with more data to fine-tune performance and improve scalability.
PR 2: Automating Performance Test Reports — “Update scheduled performance test workflow to push reports regardless of test failures”
In continuous integration (CI) and testing, automation plays a key role in ensuring reliability and efficiency. One challenge I encountered while working with OpenMRS’s performance testing workflows was that test reports were not pushed whenever a test run failed, making it difficult to track and analyze performance metrics over time.
To address this, I proposed and implemented a fix to the GitHub Actions workflow to ensure that performance test reports are pushed, regardless of whether the tests pass or fail. The main objective was to ensure that valuable test data is always made available for review and further investigation, even in the case of test failures. Here’s a link to the PR: PR #54.
What I Changed:
- Modified the existing GitHub Actions workflow to include a conditional step that pushes test reports even when a failure occurs.
- Ensured that the reports are uploaded to the designated repository or storage, allowing the team to review results promptly.
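The pattern behind this fix can be sketched as a GitHub Actions fragment: the built-in `always()` condition makes a step run regardless of whether earlier steps failed. The job, step names, and script paths below are placeholders for illustration, not the actual contents of the workflow in PR #54.

```yaml
jobs:
  performance-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run performance tests
        run: ./run-performance-tests.sh
      - name: Push test reports
        # always() runs this step even if the test step above failed,
        # so reports are published for failing runs too.
        if: always()
        run: ./push-reports.sh
```

Without the `if: always()` condition, a failing test step would skip every subsequent step, which is exactly why failed runs previously produced no reports.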
Why It Matters:
- Improved Transparency: With test reports being pushed after each run (whether successful or not), the team can always track the performance status of the system.
- Better Debugging: When a test fails, it becomes easier to pinpoint the issue by reviewing the report and making adjustments accordingly.
By automating this step, I was able to streamline the workflow and make performance tracking a more reliable and straightforward process.
Key Takeaways
These two contributions mark an important step forward in OpenMRS’s performance testing efforts. By automating report generation and expanding the test coverage to include critical use cases, I helped improve the overall efficiency and reliability of the testing process. Here are some key lessons I learned from the experience:
- Continuous Improvement: Even small changes, such as automating reporting, can have a significant impact on the workflow and development speed.
- Load Testing is Crucial: Load testing helps ensure that the system can handle real-world usage, especially for critical patient management tasks.
- Collaboration and Contribution: Contributing to open-source projects like OpenMRS not only improves the software but also fosters a sense of community and collaboration among developers and contributors.
Performance Testing Results
If you’re interested in seeing how these performance tests work and what the results look like, you can explore the ongoing performance tests for OpenMRS at this link: OpenMRS Performance Testing.
Conclusion
Contributing to open-source projects like OpenMRS has been an incredibly rewarding experience. Through these two pull requests, I’ve gained deeper insights into CI/CD, load testing, and the importance of comprehensive test coverage. I’m excited to see how these changes will help the OpenMRS community and encourage others to dive into the world of open-source contribution.