Any webpage you visit will have some form of performance delay, ranging from the page taking a long time to load to specific parts of the page not rendering correctly. The statistics on the importance of performance and speed are unequivocal: SEO experts such as Shaun Anderson warn of the dangers of slow load times, including that a two-second delay in load time causes up to 87% of users to abandon a site.
So the question is: how do you stop this from happening, and how do you improve the performance of web pages?
1. Code review
When any code is written, a code review should be conducted (if possible). This allows a second pair of eyes to go over the code, increasing the likelihood of spotting any major issues. Tips for successful code reviews are available from Google and Smart Bear, and from experienced engineers such as Philipp Hauer and Gergely Orosz.
Code review practice shouldn't stop at new changes, either; a general review of the existing codebase can help spot key performance issues.
2. Review high-risk areas
One of the tools I found useful for reviewing high-risk areas is Google Chrome's DevTools. Under its Performance section, select the circle arrow, wait for the recording to finish, and then select Bottom-Up. Here you can see the timings of specific parts of the code and what takes the most time to run during page load. This is just one of the tools you can use for checking areas of code that might be causing issues. The team at Secure Development highlight how difficult it is to allocate the time and resources of external experts, but they recommend that, if your organisation is able to access experts on secure coding, code reviews can be a highly effective way of using and transferring their expertise.
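If the Bottom-Up view points you at a suspect block of code, you can also instrument it yourself with the browser's User Timing API so the measurement shows up in the same DevTools timeline. A minimal sketch, where `renderProductList` is a hypothetical stand-in for whatever code you suspect is slow:

```typescript
// Hypothetical hot path standing in for the code under suspicion.
function renderProductList(): void {
  // ... expensive DOM work ...
}

// Bracket the suspect code with marks.
performance.mark('render-start');
renderProductList();
performance.mark('render-end');

// The measure shows up under "Timings" in the DevTools Performance panel.
performance.measure('render-product-list', 'render-start', 'render-end');

const [measure] = performance.getEntriesByName('render-product-list');
console.log(`render-product-list took ${measure.duration.toFixed(1)} ms`);
```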
3. Check server related issues
Check that the server has enough resources available to handle the requests it receives. Even something like network latency can greatly affect the overall load time. While troubleshooting your server is daunting, check out the debugging tips by Jake Walters to get you started.
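A quick way to tell server slowness apart from network latency is the browser's Navigation Timing API, which breaks the page load into phases. A small sketch you could run from the console:

```typescript
// Break the most recent page load into phases.
const [nav] = performance.getEntriesByType(
  'navigation',
) as PerformanceNavigationTiming[];

const timings = {
  dns: nav.domainLookupEnd - nav.domainLookupStart, // DNS lookup
  tcp: nav.connectEnd - nav.connectStart,           // TCP connection
  ttfb: nav.responseStart - nav.requestStart,       // waiting on the server
  download: nav.responseEnd - nav.responseStart,    // receiving the body
};

console.table(timings);
```

A high time-to-first-byte alongside low DNS and TCP times suggests the server itself is struggling rather than the network.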
4. Continually measure the performance of sites
Useful tools for measuring general performance results are New Relic, JMeter and Matomo. Each measures performance in a slightly different way, but all are very useful for working out chokepoints or bottlenecks in performance.
Built into Matomo is a page performance section, which records the time taken between a web page requesting the HTML and receiving it. The slight problem with this is that Matomo requires a user to navigate the site in order to record the data.
To solve this, we use a tool called WebdriverIO, a tool for automating a browser. With it, I create a specific series of steps for the software to follow, e.g. navigate to this page and interact with a certain object. This lets me trigger the Matomo recording without having to interact with the site manually.
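As a rough sketch, such a WebdriverIO spec might look like the one below. The pages, selectors and two-second pause are assumptions for illustration, not our actual script:

```typescript
// perf-walkthrough.e2e.ts — run with the WebdriverIO test runner.
describe('Matomo performance walk-through', () => {
  it('visits the key pages of the site', async () => {
    await browser.url('/');                  // Matomo times this page load
    await $('=Products').click();            // WebdriverIO link-text selector
    await $('#search').setValue('widgets');  // interact with a certain object
    await browser.pause(2000);               // give the tracker time to send its beacon
  });
});
```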
5. Automate measures
Using a Jenkins job (Jenkins helps to automate the non-human parts of the software development process through continuous integration), we can run performance tests against our own sites with little human interaction. To achieve this, we provide a release tag (containing the code changes we want to test) and then run the job in Jenkins. This starts a build of a separate internal site; once the build is complete, the performance tests are run against this site and the results are recorded. This all runs in the background, and the final results are stored in GitHub for further reporting.
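To illustrate the recording step, a small script like the sketch below could run at the end of the Jenkins job, appending the latest results to a history file that the job then commits. The file names and result shape here are assumptions, not our actual setup:

```typescript
// append-results.ts — hypothetical sketch of the "record the results" step.
import { existsSync, readFileSync, writeFileSync } from 'fs';

interface PerfRun {
  releaseTag: string;                 // the release tag under test
  timestamp: string;
  pageLoadMs: Record<string, number>; // page name -> average load time in ms
}

// Read the run the performance tests just produced.
const latest: PerfRun = JSON.parse(readFileSync('latest-run.json', 'utf8'));

// Append it to the history file that gets committed for reporting.
const history: PerfRun[] = existsSync('perf-history.json')
  ? JSON.parse(readFileSync('perf-history.json', 'utf8'))
  : [];
history.push(latest);
writeFileSync('perf-history.json', JSON.stringify(history, null, 2));

console.log(`Recorded ${latest.releaseTag}; ${history.length} runs in history.`);
```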
More detail about the benefits of Jenkins, and help with utilising it in your automation, can be found at Tutorials Point and Edureka.
6. Report results
A report showing current and previous data allows us to track how each version impacts the system. This is invaluable, as we can catch anything that could dampen performance much earlier in the cycle, giving us time to work out what caused it and prevent the issue. Eric Proegler offers advice on upskilling to analyse and report on performance data, and a stark warning about the damage that decisions made without understanding the findings of a performance test can do to financial results, a brand and the viability of a company.
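To make regressions stand out in such a report, you can diff the current run against the previous one and flag anything that has slowed down beyond a threshold. A sketch, assuming the page-timing shape from earlier and an arbitrary 10% threshold:

```typescript
type PageTimings = Record<string, number>; // page name -> load time in ms

// Return a description of every page that slowed down beyond the threshold.
function findRegressions(
  previous: PageTimings,
  current: PageTimings,
  threshold = 0.1, // flag anything more than 10% slower
): string[] {
  const flagged: string[] = [];
  for (const [page, now] of Object.entries(current)) {
    const before = previous[page];
    if (before !== undefined && now > before * (1 + threshold)) {
      const change = (((now - before) / before) * 100).toFixed(1);
      flagged.push(`${page}: ${before} ms -> ${now} ms (+${change}%)`);
    }
  }
  return flagged;
}

// Example: the home page got 25% slower, so it is flagged.
console.log(findRegressions({ home: 800, search: 1200 }, { home: 1000, search: 1150 }));
```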
Your aim is for end-users never to notice any difference in site performance from one version to another; any decrease in performance could have a negative impact on the user experience of the product.
Below is an example of how we would report the data from the performance results.

As processes improve, so must the tools we use. As shown above, we had a tool that gave us the five most commonly used areas. An improvement on this was a wider range of performance tests, so that we could more accurately find any performance flaws being introduced each release. With our new tool we went from the five most common areas to, on average, 366 new points of data to review. We can now look at the specific parts added to understand why performance has decreased and how we can improve it.

There are multiple ways of checking the overall performance of your webpage, but it really comes down to how much reporting you need and how much time you wish to spend measuring it. The best suggestion I can give is to use a tool to check where the bottlenecks are, allowing you to work on and improve the performance. From here, you can follow up with reporting, allowing transparency between yourself and your customers.