Date: 8 February 2019
Author: Ivan Grynenko

Building a fast and functional Drupal website

Having a site that loads and displays quickly is an important feature of any website, and something our clients are always focused on.

It’s important to understand that performance is both an art and a science, as there are several elements that influence the speed of a user’s experience. For example:

  1. Infrastructure — inadequately sized infrastructure is the most commonly perceived cause of poor performance. Appropriate sizing is, of course, important, but it is not always the real issue, and over-provisioning often simply compensates for other negative influencing factors (such as the items below).

  2. Architecture — poor system architecture, misuse, or abuse of custom modules can cause poor performance.

  3. Implementation — poor implementation (or lack of best practices) can contribute to poor performance.

  4. Data — poorly indexed data, poorly architected data or unmanaged high volumes of data can lead to poor performance.

  5. Implementation that fits infrastructure — good infrastructure often offers high-performance features and imposes some limitations on the implementation to enable those features. When those limitations are accounted for in the website implementation, the infrastructure can deliver the site at its best speed. For example, Varnish cache, Expiration headers and the Drupal cache require specific practices to be followed when developing the site or using custom modules, to ensure those high-performance features work as expected (see the sketch below).
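
As a concrete illustration of the last point, the sketch below shows how a Drupal 8+ render array (in a custom block or controller) can declare its cache metadata so that Drupal’s page cache, Varnish and a CDN can safely cache the output. The markup, cache lifetime and node ID are placeholder assumptions, not values from any particular site.

  // Minimal sketch: declaring cache metadata on a render array inside a
  // custom module, so the internal page cache, Varnish and any CDN can
  // cache the output. The markup, max-age and node ID are placeholders.
  $build = [
    '#markup' => 'Current promotions',
    '#cache' => [
      // Allow caches to keep this output for up to one hour.
      'max-age' => 3600,
      // Vary the cached copy by URL rather than by user or session.
      'contexts' => ['url.path'],
      // Invalidate automatically when the related content is updated.
      'tags' => ['node:42'],
    ],
  ];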

The process

The process involved in understanding a site’s speed requirements is:

  1. Understand functionality

  2. Understand responsive needs and expectations

  3. Understand volumetrics

  4. Define a performance SLA (if/where relevant)

  5. Size infrastructure

  6. Design, build, execute and measure performance and/or stress tests

  7. Baseline and benchmark agreed performance metrics (a minimal measurement sketch follows this list)
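
The sketch below shows one very simple way to capture a baseline response-time measurement with plain PHP and cURL. The URL and sample count are placeholder assumptions; real performance or stress tests would normally use dedicated tooling and agreed concurrency levels.

  <?php

  // Minimal baseline measurement sketch. The URL and sample count below
  // are placeholders, not values from any particular engagement.
  $url = 'https://www.example.com/';
  $samples = 20;
  $times = [];

  for ($i = 0; $i < $samples; $i++) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);
    curl_exec($ch);
    // Total transaction time in seconds, as reported by cURL.
    $times[] = curl_getinfo($ch, CURLINFO_TOTAL_TIME);
    curl_close($ch);
  }

  printf(
    "min %.3fs / avg %.3fs / max %.3fs over %d requests\n",
    min($times),
    array_sum($times) / count($times),
    max($times),
    $samples
  );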

Drupal best practice

It’s essential to adopt best practices in Drupal configuration and development to ensure the best possible performance. Common areas where best practices must be applied:

  1. Caching — use Memcached or Redis (a settings.php sketch follows this list).

  2. Content delivery network (CDN) — use a CDN such as Akamai, Fastly, CloudFront or Cloudflare.

  3. Coding — strictly follow Drupal coding standards and best practices. Automated code quality checking tools should be used during any development or security patching process.

  4. Site building — ensure image size limits are in place, no anonymous sessions are in use, and no custom cookies are affecting the site’s caching.

  5. Code analysis — provide code performance analysis to outline any performance bottlenecks that may exist in the custom code.

  6. Frontend — provide frontend performance analysis and optimisation to ensure page size is reasonable for desktops and mobile devices.

  7. Cookies — ensure no session cookies are used for anonymous visitors.

  8. Monitoring — implement automated monitoring to track basic roundtrip response time.

  9. Tooling — use tools like WebPageTest, PageSpeed Insights and similar to identify and resolve performance bottlenecks in page loading speeds.
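
As an example for the caching item above, here is a minimal settings.php sketch for wiring Drupal 8+ to Redis. It assumes the contrib Redis module and the PhpRedis extension are installed; the host, port and cache prefix are placeholders, and the lines would be added to sites/default/settings.php (which already opens with a <?php tag).

  // Connect Drupal's cache backend to Redis (assumes the contrib Redis
  // module and the PhpRedis PHP extension). Host, port and prefix are
  // placeholders for this sketch.
  $settings['redis.connection']['interface'] = 'PhpRedis';
  $settings['redis.connection']['host'] = '127.0.0.1';
  $settings['redis.connection']['port'] = 6379;

  // Route the default cache bins to Redis instead of the database.
  $settings['cache']['default'] = 'cache.backend.redis';

  // Keep cached items from different sites or environments apart.
  $settings['cache_prefix'] = 'example_site';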

Balancing cost and performance

A balance between effort (cost) and performance also needs to be struck. Establishing performance SLAs and running performance tests can be quite involved, and can cost unnecessary time and money. Configuring periodic, automated tests that report problems via email can be considered as an alternative to manually executed performance tests (see the sketch below).
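
As a sketch of that alternative, the script below checks the homepage on a schedule (for example from cron) and emails an alert when the response is slow or is not a 200. The URL, threshold and recipient address are placeholder assumptions.

  <?php

  // Minimal scheduled check sketch: alert by email when the page is slow
  // or unavailable. URL, threshold and recipient are placeholders.
  $url = 'https://www.example.com/';
  $thresholdSeconds = 2.0;
  $recipient = 'ops@example.com';

  $ch = curl_init($url);
  curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
  curl_exec($ch);
  $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
  $elapsed = curl_getinfo($ch, CURLINFO_TOTAL_TIME);
  curl_close($ch);

  if ($status !== 200 || $elapsed > $thresholdSeconds) {
    mail(
      $recipient,
      'Site performance alert',
      sprintf('%s returned HTTP %d in %.3fs', $url, $status, $elapsed)
    );
  }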

For low-traffic sites, early consideration might indicate that a site’s functional needs are relatively straightforward. In such cases, it can be safe to assume that the site will fit comfortably within the known underlying infrastructure profile, even under higher traffic, and automated monitoring may be sufficient to keep things in check and meet your needs.

Site speed case study

The DHHS Emergency Relief site is an example of a mission-critical site where solid performance and functionality under “peak” loads were critical. During bushfire season or an emergency incident, this site needs to seamlessly handle an extreme increase in user traffic. In this case, Salsa Digital needed to understand volumetrics and performance test the system against these loads to confirm the site’s expected response times. Salsa pushed the stress testing of this system to identify its “breaking point” and communicated the results to DHHS, so management could understand and accept the risk of the site’s limitations.

Final thoughts

User experience is paramount, and speed is an important factor in a positive user experience. Keeping in mind the above will help deliver the fastest site possible.
