During your software testing career, you are likely to have tested many charts and reports. I have seen testers struggle to design efficient test cases for them.
Testing reports can be tricky because you are handling a lot of real-life data, so extensive knowledge of the domain and the data is essential. In this article, we will discuss some common test cases for charts and reports.
15 Common Report Testing Areas
Values plotted on charts should match the source data
Verifying the correctness of data is of utmost importance. To do this, you will have to cross-check the data you see in the UI against the original source. The source might be a CSV file, an Excel file, or a database. For example, if you are checking a graph, you need to check the values plotted on each axis against the numbers in the database.
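One way to automate this cross-check is to compare the points the chart plotted against the rows in the source file. Here is a minimal sketch in Python; the function name, the CSV columns, and the sample data are all hypothetical, and it assumes you can extract the plotted points (for example, via the charting library's data API):

```python
import csv
import io

def chart_matches_source(chart_points, source_csv):
    """Return the mismatched points between a chart and its CSV source.

    chart_points: (x, y) pairs read from the chart;
    source_csv: CSV text with x and y columns.
    """
    reader = csv.DictReader(io.StringIO(source_csv))
    expected = {float(row["x"]): float(row["y"]) for row in reader}
    # A point mismatches if the source has a different y for that x
    return [(x, y, expected.get(x)) for x, y in chart_points
            if expected.get(x) != y]

# Hypothetical data: points the UI plotted vs. the raw source export
source = "x,y\n1,10\n2,20\n3,30\n"
plotted = [(1.0, 10.0), (2.0, 20.0), (3.0, 31.0)]  # last point disagrees

mismatches = chart_matches_source(plotted, source)
```

Each mismatch carries the x value, what was plotted, and what the source says, which makes defect reports easy to write.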
Links on charts and their navigation
There may be many hyperlinks on complex charts. Clicking these links should take the user to the appropriate pages or reports. There should be back buttons (in the report itself, if possible) that let the user return to the parent report, and the browser's back button should behave the same way. When the user navigates back to the parent report, they should see the same data they viewed the last time.
Data changes on changing filters
Complex reports will have many dropdowns and checkboxes through which users can view different combinations of data. You need to make sure that the report loads the correct data for every possible combination.
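Enumerating every filter combination by hand is error-prone, so it helps to generate them. A small sketch with Python's `itertools.product`, using hypothetical filter names and values:

```python
from itertools import product

# Hypothetical filters on a sales report
filters = {
    "region": ["North", "South"],
    "year": [2022, 2023],
    "product_line": ["Hardware", "Software", "Services"],
}

def filter_combinations(filters):
    """Yield every combination of filter values as a dict."""
    keys = list(filters)
    for values in product(*(filters[k] for k in keys)):
        yield dict(zip(keys, values))

combos = list(filter_combinations(filters))
# 2 regions x 2 years x 3 product lines = 12 combinations to cover
```

Each generated dict can then drive one test case: apply the filters, load the report, and verify the data.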
Data exportability and how reports look in exported files
Reports and charts viewed on web pages should be exportable to PDF, Excel, or CSV, as business users may need them for business presentations. For some reports, it may even be possible to export them as PPTX files. If so, you will need to test the exportability of these reports and, most importantly, how they render in the downloaded file.
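For the CSV case, the core check is a round trip: the exported file, read back, must contain exactly the rows shown on screen. A minimal sketch with Python's standard `csv` module and hypothetical report rows:

```python
import csv
import io

# Hypothetical rows as displayed on the report screen
report_rows = [
    {"month": "Jan", "revenue": "1200"},
    {"month": "Feb", "revenue": "1350"},
]

def export_to_csv(rows):
    """Export report rows as CSV text, as the app's CSV download might."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["month", "revenue"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def read_back(csv_text):
    """Parse the exported CSV back into row dicts."""
    return list(csv.DictReader(io.StringIO(csv_text)))

exported = export_to_csv(report_rows)
```

In a real test you would download the file the application produced instead of generating it yourself, then run the same comparison against the on-screen data.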
The color and design of reports
The color scheme of the reports should complement the overall design of the system. It should also give the user a 'feel good' impression, as good-looking reports improve the system's UX.
Tooltips and help texts
Tooltips and help texts should be used in a report wherever possible to give the user sufficient information about the report data and the parameters used. This is a very important usability aspect of report testing.
Use of legends
In graphs and charts, legends give users a clear indication of what each series or mark represents. So checking for the correct use of legends should also be one of your usability checks.
Dynamic plotting range
The plotting range on the graphs should be dynamic; that is, as the amount of data grows, the plotting range should adapt to accommodate the full range of data.
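A simple way to probe this is to compute what the axis upper bound should be for a dataset and assert the tallest point never clips. The helper below is a hypothetical sketch (real charting libraries have their own range algorithms), assuming positive values and a small headroom margin:

```python
import math

def axis_upper_bound(values, headroom=0.1):
    """Pick a y-axis upper bound: the peak value plus headroom,
    rounded up to a round step so labels stay readable.
    Assumes values are positive."""
    peak = max(values) * (1 + headroom)
    step = 10 ** math.floor(math.log10(peak))
    return math.ceil(peak / step) * step

# As the dataset grows, the bound should grow with it
small = axis_upper_bound([10, 45, 80])
large = axis_upper_bound([10, 45, 80, 120])
```

In a test, you would compare the axis range the chart actually rendered against a bound like this and flag any data point that falls outside it.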
Checking reports for huge amounts of data
You will also need to test how the report UI behaves as the amount of data increases. At times, you will see reports break or give undesired results when large datasets are loaded into the UI. So testing the reliability and consistency of the reports should also be on your checklist.
Testing boundary values in reports
When testing graphs, you will need to test boundary value conditions as well. For example, for a value of 10 on the X axis, you will cross-check the corresponding Y axis value with the table data. But the table might not have data for the boundary values surrounding 10, such as 9.99 and 10.01 (considering only two decimal places). So you will need to test how the output behaves for these boundary values.
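Generating the just-below, at, and just-above values for a boundary can be automated. A minimal sketch, with the function name and precision parameter chosen for illustration:

```python
def boundary_values(point, precision=2):
    """Return the values just below, at, and just above a boundary,
    at the given decimal precision (e.g. 10 -> 9.99, 10.0, 10.01)."""
    step = 10 ** -precision
    return [
        round(point - step, precision),  # just below the boundary
        float(point),                    # on the boundary
        round(point + step, precision),  # just above the boundary
    ]

cases = boundary_values(10)
```

Feeding each of these values into the report and checking the plotted output covers the boundary conditions described above.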
Testing loading time for different amounts of data
Testing the performance of report pages is very important, as the complexity of the data can cause a number of performance issues. Performance can be tested using performance and load testing tools or through manual verification.
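For a rough manual check, you can time the rendering step over datasets of different sizes. This sketch is not a substitute for a proper load testing tool; the renderer here is a stand-in callable, and in practice you would time the real report request or render:

```python
import time

def measure_load(render, dataset, repeats=3):
    """Time a report-rendering callable over a dataset, returning the
    best of several runs to reduce timing noise."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        render(dataset)
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical renderer: summing rows stands in for building the report
elapsed = measure_load(lambda rows: sum(rows), range(100_000))
```

Running this for small, medium, and large datasets shows how loading time scales, which is exactly what this test area is about.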
Equivalence class partitioning testing
Equivalence class partitioning is a testing technique in which you divide the range of data into classes according to the test conditions, so that you can avoid testing every data point. This technique is very helpful when you want to test a huge data range containing different conditions.
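The technique can be sketched as a classifier plus one representative value per class. The classes and thresholds below are hypothetical (say, for an order-total field on a sales report):

```python
def partition_class(value):
    """Map an order-total value to its equivalence class.
    Thresholds are hypothetical examples."""
    if value < 0:
        return "invalid"   # negative totals should be rejected
    if value == 0:
        return "empty"     # zero total: a no-data edge case
    if value <= 1000:
        return "typical"   # everyday order sizes
    return "bulk"          # large orders, tested separately

# One representative per class replaces testing every possible value
representatives = [-5, 0, 500, 5000]
classes = [partition_class(v) for v in representatives]
```

Testing one value from each class gives the same coverage as testing the whole range, which is the point of the technique.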
Responsiveness of report design
The reports should be viewed on all supported browsers and platforms so you can visually validate the data and design across them. Testing can be carried out on different browsers (Chrome, IE, Safari, etc.) and different devices (mobiles, tablets, desktops), and you may need to test on different screen sizes as well. A cross-browser/platform testing tool such as Browserling can help meet these requirements.
No data available messages
If data is unavailable for a particular report, a proper message should be shown. You cannot show a blank screen to the user when no data is available yet.
Loading indicators
This is a very important test case. Users expect to see a loading indicator while data is being loaded into a report. If loading indicators are missing, users may end up reviewing incomplete or wrong data. Loading indicators also stop users from performing actions on the report while data is still loading.