
5 things you should know about Next.js

Dec 01, 2022 | The Contentstack Team


Next.js is a React framework that supports high-performance web applications through a set of modern tools and features. These building blocks help you develop sleek React apps with scalable infrastructure, numerous integrations, flexible routing and a better user experience.

To speed up development, Next.js takes the guesswork out of assembling third-party libraries by shipping built-in solutions for common front-end needs such as routing, image optimization and data fetching. It also lets you mix rendering strategies, so a single application can combine statically generated pages with dynamically rendered ones.

Next.js is a React framework that improves how your business operates online:

SEO: Pre-rendered pages are easy for search engines to crawl, increasing visibility in the SERPs and driving more organic traffic.

Responsive: Built-in features such as automatic image optimization help pages adapt to any device or screen size.

Time to market: Pre-built components and conventions save time on development, testing and feedback.

Versatile UX: Complete control over visual design means you can forget about plugins and templates.

As a standalone JavaScript library, React excels at building reusable UI components, but on its own it renders content in the browser, which can slow initial page loads and hurt SEO. With Next.js, it's much easier to start a project: server-side rendering comes built in, and you no longer have to configure build tools or reinvent the wheel.

5 things to know about Next.js

1. Data fetching methods: SSG, SSR and ISR

Rendering turns your React components into the HTML the browser displays. Next.js lets you choose when that happens for each page: at build time (static site generation), on each request (server-side rendering), in the browser (client-side rendering) or on a schedule (incremental static regeneration).

In a Next.js project, each file in the pages directory is linked to a route, including dynamic routes, and pages can be rendered in advance without dipping into client-side JavaScript. This is done through either static site generation (SSG) or server-side rendering (SSR).

SSG generates a page's HTML once at build time, so the output can be cached on a CDN and crawled easily by search engines, which boosts SEO rankings. That makes it a versatile choice for marketing pages and blogs whose content lives in a back-end database or CMS. You just have to export a getStaticProps function from the page to fetch data at build time and pass it to the component as props.
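As a sketch of how this looks in code (the file name and post data are hypothetical; a real page would pull entries from a CMS), a page exports getStaticProps to receive its content as props at build time:

```javascript
// pages/posts.js — hypothetical page, pre-rendered once at build time.
// getStaticProps runs only during the build; its return value becomes
// the page component's props.
export async function getStaticProps() {
  // A real app might fetch these entries from a headless CMS here.
  const posts = [
    { id: 1, title: "Hello Next.js" },
    { id: 2, title: "Static generation in practice" },
  ];
  return { props: { posts } };
}

export default function Posts({ posts }) {
  return posts.map((p) => p.title).join(", ");
}
```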

SSR renders a page's HTML on the server for every request and sends the result to the browser. It does wonders for user experience when content must be fresh on each visit, such as user dashboards or portals. To fetch page data at request time, export a getServerSideProps function; the returned props are serialized as JSON and passed to the component.
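For comparison, a minimal server-rendered page might look like this (the page name and query parameter are made up for illustration):

```javascript
// pages/dashboard.js — hypothetical page, rendered on every request.
// getServerSideProps receives a context object describing the request;
// its props are serialized to JSON and handed to the component.
export async function getServerSideProps(context) {
  // context.query holds the parsed query string, e.g. /dashboard?user=ada
  const user = context.query.user || "guest";
  return { props: { user } };
}

export default function Dashboard({ user }) {
  return `Welcome back, ${user}`;
}
```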

Incremental static regeneration (ISR) differs in that pages are regenerated after deployment rather than all at once during the build. Users are served the existing pre-rendered version until a regenerated page replaces it or the cache timeout expires. Because ISR applies static generation to individual pages, you aren't forced to rebuild the whole site.
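Turning a statically generated page into an ISR page is a one-line change: add a revalidate interval to the object returned by getStaticProps (the 60-second window below is just an example):

```javascript
// Hypothetical ISR sketch: `revalidate` tells Next.js that, after a
// request arrives, it may regenerate this page in the background at
// most once every 60 seconds. Visitors keep seeing the cached version
// until the regenerated page replaces it.
export async function getStaticProps() {
  const generatedAt = new Date().toISOString();
  return {
    props: { generatedAt },
    revalidate: 60, // seconds before the cached page is considered stale
  };
}
```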

2. Build an API with dynamic routes

For those who want to develop their own API, Next.js offers a solution through API routes. How it works: any file in the pages/api directory is treated as an API endpoint and can be deployed as a serverless function.

API routes pair well with SSG for grabbing page data from a headless CMS, and with Preview Mode for loading a copy of your draft before it gets published.

A great use case for API routes is processing user input, mainly from account sign-ups or forms. The form sends a POST request to the API route, which validates the input and saves the entry to your database.
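A sketch of such a route (the file name, fields and status codes are illustrative; a real handler would write to a database):

```javascript
// pages/api/signup.js — hypothetical route that accepts a sign-up form.
// Next.js parses JSON request bodies automatically and exposes them
// on req.body.
export default function handler(req, res) {
  if (req.method !== "POST") {
    res.status(405).json({ error: "Only POST is allowed" });
    return;
  }
  const { email } = req.body || {};
  if (!email) {
    res.status(400).json({ error: "Missing email" });
    return;
  }
  // A real app would save the new account to a database here.
  res.status(201).json({ saved: true, email });
}
```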

Here is the code for creating an API route that passes in the HTTP message and server response: 

export default function handler(req, res) {
  res.status(200).json({ text: "Hello World!" });
}

Inside the handler, you return a JSON response with status code 200; during local development you can reach the endpoint at http://localhost:3000/api/hello (assuming the file is pages/api/hello.js). The req parameter is the incoming HTTP message, and res is a helper object with methods for shaping the server response.

Dynamic API routes allow you to establish a REST pattern, such as a single GET request that returns a list of posts or a media library in one go. Catch-all routes match any number of path segments and pass them to the handler as query parameters.
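As a sketch, a dynamic route file captures part of the URL as a query parameter (the path and data below are hypothetical):

```javascript
// pages/api/posts/[id].js — hypothetical dynamic API route.
// For a request to /api/posts/42, Next.js sets req.query.id to "42".
export default function handler(req, res) {
  const { id } = req.query;
  const post = { id, title: `Post ${id}` }; // stand-in for a DB lookup
  res.status(200).json(post);
}
```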

API routes can also follow REST or GraphQL conventions and can set CORS headers for cross-origin access. GraphQL is especially useful for defining schemas and handling file uploads.

3. Features for editing and handling scripts

Enable Fast Refresh

One neat feature of Next.js is Fast Refresh, which gives you near-instant feedback on the edits you've made to a React component.

Enabled by default on newer versions, Fast Refresh applies most changes within a second while preserving component state. If a file exports only React components, Fast Refresh updates just that file, so you can modify core logic or event handlers at will.

If you edit a file that doesn't export React components, Fast Refresh re-runs that file and every file that imports it. It also recovers quickly from mistakes: syntax and runtime errors surface immediately, and fixed code applies without reloading the app.

Local state is preserved for function components that use useState and useRef. To force a state reset instead, add // @refresh reset to the modified file; the components defined there will then be remounted on every update.

Automatic Code Splitting

Automatic code splitting bundles your application into separate pages that are accessible through different URLs, turning them into new entry points. On closer inspection, each page is partitioned into smaller chunks to eliminate dependencies and reduce the number of requests. 

The point of splitting the JavaScript bundle is to cut load times by only running a page's code once a user lands there. It's a form of on-demand loading: resources are loaded in order of importance rather than all up front.

Splitting avoids re-downloading duplicate code on repeat navigations, while still preloading other pages a user might visit soon. The same applies to shared modules and dependencies. The final outcome is a smaller payload size for your React app.

4. Manage websites on the Jamstack architecture

Jamstack has been pivotal in the modern responsive website scene, helping ensure sites are secure, fast and adaptable to business needs.

It combines pre-rendering with decoupled browser applications, and works with static site generators such as Jekyll, Gatsby and Docsify for lightweight editing of Markdown files and templates.

The vast majority of Jamstack sites host personal blogs, B2B products and e-commerce stores. Some are enterprise sites frequented by millions of users, where the pre-rendered architecture spares developers from working around the clock to keep servers online.

Jamstack technologies are not only available on the client side but are also being implemented on the server side by full-stack developers. Whether it’s leveraging microservices or containers, there are functions that perform every task on your workflow checklist. 

When merged with CMS platforms, these site generators are bound to drive organic traffic to your website and attract more app users in the long run. 

5. Compile TypeScript for a uniform rendering experience

If you're starting a new TypeScript project, you won't need to configure anything by hand. You can clone the starter simply by calling create-next-app with the --typescript flag, then follow the directions in the command output to finish the setup.

To convert an existing project, create an empty tsconfig.json file in the root folder; Next.js will populate it with default settings. Then it just boils down to adjusting the compiler options if needed and running npm run dev, which prompts you to install the required packages and finishes the setup.
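For reference, a minimal tsconfig.json might look like the following (the exact defaults Next.js writes vary by version, so treat these values as illustrative):

```
{
  "compilerOptions": {
    "target": "es5",
    "lib": ["dom", "dom.iterable", "esnext"],
    "allowJs": true,
    "strict": false,
    "jsx": "preserve",
    "module": "esnext",
    "moduleResolution": "node",
    "esModuleInterop": true,
    "incremental": true
  },
  "include": ["next-env.d.ts", "**/*.ts", "**/*.tsx"],
  "exclude": ["node_modules"]
}
```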

Next.js ships specific types for static generation and SSR, such as GetStaticProps and GetServerSideProps, that you can use to load web content asynchronously. When migrating an app to TypeScript, import the built-in AppProps type for your custom App component and rename your .js files to .tsx so the compiler picks them up.

And lastly, the TypeScript integration supports next.config.js and incremental type checking to help detect any errors that pop up in your application.

Optimize your site performance

When building a React app, relying on a framework that specializes in the hard parts is the answer to development costs that would otherwise exhaust your resources.

After you adopt the Next.js framework, installing packages along with making API calls won’t be as complicated anymore. If you’re serious about perfecting the user experience, getting Next.js will optimize your site performance for better visibility on search engines.  

The system is packed with exceptional tools for importing JavaScript libraries, designing elegant themes, image rendering and much more. It has everything you need to launch a successful project on React.

Share on:

Recommended posts

Dec 29, 2022

How React works in a composable architecture

React is a JavaScript library widely frequented by web developers who plan on building composable elements for dynamic interfaces. By default, it is a declarative and flexible framework for altering web and app data without having to refresh the DOM every time. A React CMS splits the roles of designers and developers, placing them into a front-end or back-end role respectively. React is a collection of designated components used to maintain a structured front end, for performing actions like validating forms, controlling states, arranging layouts and passing in data. Described as a headless infrastructure, the three main ingredients of a React CMS are React, REST API and GraphQL. These libraries allow you to scale content across many channels and devices by eliminating codebase dependencies that would be prevalent in a traditional CMS environment.  When should you use a React CMS?A React CMS is ideal for editing the elements that users interact with, from buttons to dropdowns on your website. And for organizing larger projects, complex code logic is grouped by matching patterns to help you track the state of apps. It will update your source code in the DOM to reflect changes in app requirements so the content gets delivered without any compatibility issues. This is achieved by tracking the modified versions of your components to back up your data before the system restarts. If you prefer something more substantial than drag-and-drop customization, then you should consider getting a React CMS to access native API configurations and code blocks that are fully decoupled from the presentation layer. This will save you time on having to manually update plugins or extensions, so you can divert resources to creating and deploying the app through its API-based integrations. Moreover, a React CMS has been shown to improve performance by allocating less memory to track component changes. 
To get around loading delays, it will use the virtual DOM to render only the assets that are found at the URL. Instead of receiving just the themes and templates, you have complete control over the content layout for fetching chunks of data from API calls to populate your web pages with the desired elements. How a React CMS works with APIs to distribute contentWhen React is combined with a CMS, it lets you preview the output of workflows before you publish them onto a page. A React CMS is able to transmit on-demand data between the client and server or even during compilation time, to dictate how objects are rendered on a webpage. Using a composable model, you can call the API to request information from root directories or databases on the server, dividing your website functionality into closed environments that will not interfere with each other. From a technical standpoint, React CMSes make it possible to edit visual elements through your site’s HTML, by tying it back to the schema of GraphQL as you fill in the fields or toggle the settings. It’s also great for patching bugs in your JS bundles that might otherwise lead to delayed page interactions or even downtime on the server. Rather than create a project from scratch, the composable architecture makes it easy to reuse content over multiple channels. In addition, you can search for third-party integrations on the market to help you build streamlined apps that contribute to the overall React ecosystem. As such, swapping out components is the way to go when your team is pressed for time on the next feature release. By employing API-first methods, you won’t have to monitor CMS servers or databases in messy clusters, unlike what happens in traditional CMS solutions. What are the benefits and features of a React CMS?A React CMS ensures the continuous operation of components on the app, giving you composable options to import modules that perform what you need on the client. 
Once you understand the fundamental components, it becomes easier to develop and maintain web apps by leveraging just the required functionality to deliver consistent user interactions. To manage your databases, it utilizes GraphQL to recover queries from the app data in a structured format. As a substitute for REST, GraphQL caches and updates databases with new entries, thereby combining them with Apollo or Axios packages to execute your API requests. Another aspect is the custom editing experience, which generates dynamic content in an organized manner, so you can avoid a conflict of interest when loading HTML and JSON files in succession. If you’re looking for a specific feature to implement, such as a login page or shopping cart to enhance the user experience, you can learn about them in detail through the support documentation. The goal is to stabilize your app’s performance during page loads to improve the accessibility of various media types. To see the CMS in action, you can simply declare the permission and hierarchy of API objects using the default arguments. But before you map out the visuals, it’s best to have a clearly defined scope of the app by taking measures to scale it in conjunction with your network or server capacity. Choosing a React CMS to decouple your web servicesFor enterprise workflows, React APIs are a must-have that can shorten the time to market by automatically cleaning content backlogs and preparing for site migrations. Since there are lots of options for React CMSes, you’ll have to narrow down which libraries are capable of handling your app’s payload. If you want a composable CMS focused on developers, get one that offers a large collection of third-party frameworks or extensions in order to cover all bases of your React app. For example, you may need conditional fields to verify user accounts or support for SQL to join multiple tables containing product details. 
Another advantage is being able to override protocol errors or software failures that are detrimental to performance indicators before they end up on the latest build. This ensures development is productive and has room to grow into cross-platform capabilities. The cost of implementation is well worth it for specialized use cases that cover static site generators, mobile apps and web services. In return, this puts custom assets, webhooks and test scenarios at your fingertips, so you can keep adding integrations with other tools without worrying about the impact on existing code.With headless CMS functionality, you can frame API responses and multiple SaaS solutions around predictable outcomes to close the gap between React and your site content. Learn moreIf you would like to learn more about the benefits of a composable architecture, see our article “Why a composable CMS is right for you.” Schedule a free demo to experience the benefits of a composable CMS with Contentstack’s headless CMS-based digital experience platform.  

Oct 06, 2022

GraphQL vs. REST API: Which is better for querying data?

GraphQL vs. REST API: Which is better for querying data?Choosing the best API for compiling data can seem overwhelming if you don’t know how well they perform on a larger database. Developers typically use them to exchange data between programs and build functionality for their web apps. This makes it possible for the front-end and back-end teams to communicate better and develop products for consumers. The top two APIs are GraphQL and REST, each with its own pros and cons for sending a request and retrieving the result. GraphQL is considered an open-source data query and manipulation language for APIs, whereas REST is defined as an architectural standard for interacting with localized software services. As a developer, you might be curious about the potential use cases of both, as they provide a seamless environment for testing new features. Ultimately, this comes down to the scope of your project and what problems you’re trying to solve. This article will explore how they compare on multiple fronts, from fetching relevant information to sorting entries by category. Properties of REST APIREST diverges from GraphQL in that requests are grouped via endpoints and mutations can have any format besides string. It relies on a GET command to fetch resources (JSON response), which requires making multiple API calls to grab separate search results. Likewise, it is server-driven rather than client-driven architecture stacked into layers of hierarchy. Here are the key features of REST API:Each HTTP status code points to a unique response The server determines the shape and size of resourcesAbility to cache on the browser or server with a CDNHas a uniform interface that decouples the client from the serverPlenty of flexibility since calls are stateless and do not depend on each otherBenefits of REST APIREST works best on media files, hardware or web elements, mapping a linear path to those resources. 
A REST API will boost its performance by scaling to client demands and is capable of locating resources by name. It is built for storing common data types in memory and can be deployed from several servers in one sitting. With REST API, you get the opportunity to develop apps in all kinds of environments, due to how it integrates with a wide range of frameworks. It has been implemented in languages including Python, Java, PHP, and Ruby, enabling you to perform operations or create object instances explicitly over the protocol. On the bright side, you can easily migrate from one server to the next, or even build a portable UI across platforms and OSes. REST is ideal for automatic HTTP caching, reporting on errors, and has you covered against DDoS attacks. Nonetheless, its simplicity has some merit, being that it’s easy to extend and modify for connecting to other apps or services. Properties of GraphQLOn the other hand, GraphQL overcomes the hurdles presented by REST, as it allows the user to make targeted queries using a POST request. This is directed toward a single URL endpoint and returns the matching result if it exists in the database. GraphQL is instead arranged by schema, so the identity won’t match its fetch method. To validate queries, it will scan the cached metadata, an option not supported by REST. Here are the features that define GraphQL: A self-documenting model that conforms to the client’s graph dataThe server dictates which resources are open to the userReduces overhead communications with API providersSelects the type of operation using a keyword on the schemaA request executes multiple fields that converge at the same endpointAdvantages of GraphQLGraphQL brings many benefits to the table, shaping JSON data into a readable syntax. It expresses more consistency across operating systems, boasting faster development speeds on all platforms. It is capable of decoupling the front end from the back end to encourage work done on independent projects. 
To drive productivity, front-end iterations are no longer tied to back-end adjustments, placing less burden on the server. It’s also strongly typed, limiting queries to certain data types based on context. This API is designed to help you with query batching and caching by merging SQL queries to prevent a session timeout. You can look at what each function does and create custom requests to meet your users’ needs. In terms of upkeep, GraphQL will sync to updated documents and maintain version control without manual input. One advantage of GraphQL is the removal of excess data to prevent over-fetching on the fields you specify. On the flip side, you could run into under-fetching and not extract enough JSON values from an endpoint. This doesn’t happen on GraphQL because once a query is sent, the server reveals the exact data structure. Examples of When to Use GraphQL GraphQL is suitable for creating cross-platform apps that customers can access on a mobile phone. Its schema is designed for chaining longer query objects on the client side, helping you gain a better understanding of how data is extracted. GraphQL is used to enhance mobile app performance by reducing load times with fewer calls. It expands your API functionality to remedy issues reported on older versions.Schema stitching is quite convenient for modifying the client side since multiple schemas are combined into one to complete a data source. Let’s look at a few sample queries:Type Novel {    id: ID    title: String    genre: Genre    author: Author}Type Genre {    id: ID    published: Date    genres: [“Fantasy”, “Science Fiction”, “Non-Fiction”, “Adventure”, “Mystery”]    novels: [Novel]}While this example describes the fields under a Novel type, it does not give away how the object is fetched from the client. You still have to construct a Query type to access the values of a novel by its author or genre. Many applications require you to add, delete, or update data on the backend. 
This is achieved by utilizing three types of mutations.  Here is how you declare a mutation:First, use the mutation keyword to create one that resembles a standard query. For each field, it can take any number of arguments. mutation {    rootField(arg1: value1, arg2: value2) {        arg1        arg2    }}This root field passes in two parameters that return a specific output. Next, you’re able to assign different properties to the mutation object which will show up in the server response.Disadvantages of GraphQLGraphQL isn’t without its problems. Its single endpoint lacks caching, which is possible with a GET request, meaning that you have to implement browser caching for non-mutable queries. In some cases, GraphQL mutations can become buried under a flood of data types. Although you can pull up exact queries, you have no say over third-party client behaviors. Search operations like joining queries are more complicated than on REST microservices that route requests over a web host or URL. By default, GraphQL’s rigid queries are difficult to model for advanced aggregations or calculations. As for security, monitoring on GraphQL is practically nonexistent because only a few SaaS contain those API analytics.Examples of When to Use REST API A REST API is your best bet if users need to submit requests as separate URLs to retrieve data from microservices architecture. For projects smaller in scope, you can save memory space by importing its tools on your desired framework to designate unique ids on a handful of calls. If you’re not too fixated on collecting insights or dealing with backward compatibility, REST will do a good enough job. Generally speaking, a REST request comprises of the header, body, endpoint, and HTTP method for managing the standard CRUD operations. 
To initialize a basic call, you should do the following: response = requests.get(“https://siteurl.com/path.json”)print(response.json())The output to the body section: {    “parameter 1”: “value 1”,    “parameter 2”: “value 2”,    “parameter 3”: “value 3”}A successful request will return code 200 and display the types of fields (strings, integers, dictionaries) stored in that library. REST resources are distinguished by their URLs, which are recovered by delivering JSON to the server (i.e. GET, POST). To illustrate, we will perform an API call that reaches the endpoint of a user profile. Let’s jump into posting JSON to the server:{    “Id”: 529387,    “Name”: {        “First”: “John”,        “Last”: “Brown”    },    “Age”: 24,    “Occupation”: “Research Associate” }In the above example, a response returns the output of an employee who works at a biotech company. To update these fields, you also need to set the MIME type for the body to application/json. Drawbacks of REST APIAfter mobile’s rise to popularity, REST was deemed too inflexible to address network issues. Simply put, it struggled to convert app data to a graphical form by attempting to scale its unstructured parameters. If you want to grab a specific attribute, you have no choice but to create new resources and modify them across multiple roundtrips. Because REST is server-driven, clients are entirely dependent on network conditions. It often leads to nested N+1 queries, chaining API calls on the client, and making the returned URI harder to read. It introduces delays in the development lifecycle where front-end teams must wait for back-end teams to deliver the API, thereby pushing back product release dates. ConclusionThe main takeaway from all this is that GraphQL and REST API serve different purposes in the app development lifecycle. GraphQL gives the data you’re looking for without over- or under-fetching and is compatible with advanced techniques like transforming fields into different values. 
REST is easier to implement on JS frameworks if you plan to locate a precise resource or design an interactive website.An important thing to remember is they both have advantages and disadvantages depending on the product specifications as well as the user requirements. GraphQL may have the upper hand in an agile environment, but it still has room for improvement. REST has more existing tools and integrations; however, it can be affected by poor network conditions. 

Jul 14, 2022

What You Need to Know About E2E Testing with Playwright

Contentstack recently launched Marketplace, a one-stop destination that allows users to find, create and publish apps, connect with third-party apps and more. It aims to amplify the customer experience by enabling them to streamline operations. Marketplace now has a few Contentstack-developed apps and we will introduce more in the future.Initially, we tried to test these apps manually but found this too time-consuming and not scalable. The alternative was to use an end-to-end (E2E) testing tool (Playwright in our case), which helped us streamline and accelerate the process of publishing the apps.Playwright is a testing and automation framework that enables E2E testing for web apps. We chose Playwright because of its classic, reliable and fast approach. Besides, its key features such as one-time login, web-first approach, codegen and auto-wait make Playwright suitable for the task at hand.This article will walk you through the processes we used and the learnings we gathered.Our Testing ProcessesIn this section, we detail the processes we followed to test the Marketplace apps using Playwright.Set-up and Tear-down of Test DataPlaywright permits setting up (prerequisites) and tearing down (post-processing) of test data on the go, which helped us accelerate our testing.There are additional options available for this:global set-upglobal tear-downbeforeAll & afterAll hooksbeforEach & afterEach hooksIdeally, a test establishes prerequisites automatically, thereby saving time. Playwright helped us do that easily. Once the test was concluded, we deleted the app, content type, entry or the other data we initially set up.Playwright helped us achieve the following on the go:Auto-create app in the dev centerAuto-create content type and entryTest Directory StructureWe added all the test-related files and data to the test's repository. 
The following example explains the process:For the illustration app (see image below), we added the E2E test inside the 'test/e2e' folder.Next, we included the 'page-objects/pages' (diverse classes) for multiple web pages and tests. The Page Object Model is a popular pattern that allows abstractions on web pages, simplifying the interactions among various tests.We then placed the different tests (spec.js) under the test folders and the utility operations under /utilsAll the dependencies of E2E tests were put in the same .json package but under dev dependencies.We attached .env(env. sample) with correct comments to add the environment variables correctly.After that, we added support for basic auth on staging/dev.In the next stage, we added the Readme.md details about the project.We used the global-setup for login management to avoid multiple logins.Next, we used the global-tear-down option to break the test data produced during the global-setup stage.Finally, we used beforeAll/afterAll hooks to set-up/breakdown test data for discrete tests.How to Use Playwright Config Options & Test HooksGlobal-setup & Global tear-down:Both global-setup and global tear-down can be configured in the Playwright config file.Use global-setup to avoid multiple logins (or any other task later required during the test execution) before the tests start:Global set-up easily evades repetitive steps like basic auth login and organization selection.That way, when the tests are conducted, the basic requirements are already in place.Below is the example of a sample code snippet for a global set-up:Use global-tear-down to break down any test data created during the global-setup file.The test data generated using global-setup can be eliminated in global-teardown.While global-setup/global-teardown are the config option/s for an entire test suite, before/after tests hooks are for the individual tests.Test Hooks Available in PlaywrightPlaywright hooks improve the efficiency of testing solutions. 
Here is a list of test hooks available in Playwright:test.beforeAll & test.afterAllThe test.beforeAll hook sets test data shared between test execution like entries, creating content types and establishing a new stack. The test.afterAll hook is used to break or tear the test data. This option helps eliminate any trace of data created for test purposes.test.beforeEach&test.afterEachThis hook is leveraged to set up and break down test data for individual tests. However, the individual text execution and the concurring data might vary. Users can set up the data according to their needs.Tips & Tricks for Using PlaywrightWhile using Playwright, we learned a few valuable lessons and tips that could be useful to you:Using the codegen feature to create tests by recording your actions is a time-saving approach.You can configure Retires in the playwright config file. It helps in case of a test failure. You can re-run the test to come up with relevant results.The Trace Viewer allows you to investigate a test failure. 
This feature includes test execution screencast, action explorer, test source, live DOM snapshots and more.Use the timeout feature for execution and assertion during testing.By setting up a logger on Playwright, you can visualize the test execution and breakpoints.Using the test data attributes during a feature development navigates the test through multiple elements, allowing you to identify any element on the DOM quickly.Recommended Best PracticesWhile using Playwright for E2E testing of our marketplace apps, we identified a few best practices that might come in handy for other use cases.Parallelism:Test files run by default on Playwright, allowing multiple worker processes to run simultaneously.Tests can be conducted in a single file using the same worker process.It's possible to disable the test/file execution and run it parallelly to reduce workers in the config file.The execution time increases with the number of tests; parallelly running tests are independent.Isolation:Each browser context is a separate incognito instance, and it's advisable to run each test in individual browsers to avoid any clash.In isolation, each browser can emulate multi-page scenarios.It's possible to set up multiple project requirements in playwright config as per the test environment similar to baseURL, devices and browserName.Speed of Execution:Parallel test execution, assigning worker processes and isolation expedite the running of test results.Elements like test data, global tear-down and set-up affect the execution speed regardless of the number of worker processes.Double Quotes Usage:Use double quotes if you come across multiple elements on the exact partial string.Help establish case sensitivity. For instance, awaitpage.locator('text=Checkout') can return both elements if it finds a "Checkout" button and another "Check out this new shoe."The double usage quotes can also help return the button on its own, like await page.locator('text="Checkout"'). 
For details, check out the Playwright text selectors.

Prioritizing User-facing Attributes:
- It's advisable to target user-facing attributes like text content, accessibility roles and labels whenever possible. Avoid using "id" or "class" to identify elements. For example, await page.locator('text=Login') is recommended over await page.locator('#login-button').
- A real user finds the button by its text content, not by its id.

Use Locators Instead of Selectors:
- Locators reduce flakiness or breakage when your web page changes. When using standalone selectors, you may not notice such breakages.
- Example: use await page.locator('text=Login').click() instead of await page.click('text=Login').
- Playwright makes it easy to choose selectors, ensuring robust, non-flaky testing.

Wrapping Up

In a world dominated by Continuous Integration and Delivery, E2E testing is the need of the hour. Though it can be tedious, following the practices above will save you time and improve your product.
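Putting the hooks and locator advice together, a test spec might look like the sketch below. The shared stack name, the /login route and the button text are hypothetical placeholders for illustration, not APIs or values from this article:

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical shared test data, created once for all tests in this file.
let stackName: string;

test.beforeAll(async () => {
  // Set up data shared across tests (e.g., entries, content types, a stack).
  stackName = `e2e-stack-${Date.now()}`;
});

test.afterAll(async () => {
  // Tear down the shared data so no trace of test artifacts remains.
});

test.beforeEach(async ({ page }) => {
  // Per-test setup: each test runs in a fresh, isolated browser context.
  await page.goto('/login');  // hypothetical route, resolved against baseURL
});

test('user can log in', async ({ page }) => {
  // Prefer user-facing text over CSS ids; double quotes force an exact match.
  await page.locator('text="Login"').click();
  await expect(page.locator('text=Welcome')).toBeVisible();
});
```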

Apr 12, 2022

Zero-cost Disaster Recovery Plan for Applications Running on AWS

Statistics show that over 40% of businesses will not survive a major data loss event without adequate preparation and data protection. Though disasters don't occur often, the effects can be devastating when they do.

A Disaster Recovery Plan (DRP) specifies the measures to minimize the damage of a major data loss event so businesses can respond quickly and resume operations as soon as possible. A well-designed DRP is imperative to ensure business continuity for any organization. If you are running an application, you must have a Disaster Recovery Plan in place, as it allows for sufficient IT recovery and the prevention of data loss.

While there are traditional disaster recovery solutions, there has been a shift to the cloud because of its affordability, stability and scalability. AWS gives you the ability to configure multiple Availability Zones when launching an application infrastructure. In an AWS Region, Availability Zones are clusters of discrete data centers with redundant power, networking and connectivity. If downtime occurs in a single Availability Zone, AWS will immediately shift the resources to a different Availability Zone and launch services there.

Of course, downtimes do occur occasionally. To better handle them, you should configure the Auto Scaling Groups (ASGs), Load Balancers, Database Clusters and NAT Gateways in at least three Availability Zones to withstand (n-1) failures; that is, the failure of two Availability Zones (as depicted in the diagram below).

Disaster Management within an AWS Region

Regional Disaster Recovery Plan Options

A regional disaster recovery plan is the precursor to a successful business continuity plan and addresses questions our customers often ask, such as:

- What will the recovery plan be if the entire production AWS region goes down?
- Do you have a provision to restore the application and database in any other region?
- What is the recovery time of a regional disaster?
- What is the anticipated data loss if a regional disaster occurs?

The regional disaster recovery plan options available with AWS range from low-cost and low-complexity (making backups) to more complex (using multiple active AWS Regions). Depending on your budget and your uptime SLA, there are three options available:

- Zero-cost option
- Moderate-cost option
- High-cost option

While preparing the regional disaster recovery plan, you need to define two important factors:

- RTO (Recovery Time Objective): the time to recover in case of disaster
- RPO (Recovery Point Objective): the maximum amount of data loss expected during the disaster

Zero-cost option: In this approach, you begin with database and configuration backups in the recovery region. The next step involves writing automation scripts to facilitate the infrastructure launch in the recovery region within a minimum time. In case of a disaster, the production environment is restored using the existing automation scripts and backups. Though this option increases the RTO, there is no need to launch any infrastructure in advance for disaster recovery.

Moderate-cost option: This approach keeps a minimum infrastructure — the database and configuration servers — in sync in the recovery region. This arrangement reduces the DB backup restoration time, significantly lowering the RTO.

High-cost option: This is a resource-heavy approach that involves running load balancers and the production environment across multiple regions. Though it's an expensive arrangement, with proper implementation and planning the application recovers from a single-region disaster with little downtime.

Zero-cost Option: The Steps

The zero-cost option does not require launching additional resources in the recovery region in advance; the only cost incurred is for practicing the DR drills.
Step 1: Configure Backups

At this stage, reducing data loss is the top priority. The first step is configuring cross-region backups in the recovery region. With a proper backup configuration, you can reduce the RPO. It's essential to configure cross-region backups of:

- S3 buckets
- Database backups
- DNS zone file backups
- Configuration (Chef/Puppet) server configuration
- CI/CD (Jenkins/GoCD/ArgoCD) server configuration
- Application configurations
- Ansible playbooks
- Bash scripts for deployments and cron jobs
- Any other application dependencies required for restoring the application

Step 2: Write Infrastructure-as-Code (IaC) Templates — Process Automation

Using IaC to launch the AWS infrastructure and configure the application will reduce the RTO significantly, and automating the process will lessen the likelihood of human error. Many automation tools are widely available:

- Terraform code to launch the application infrastructure in AWS
- Ansible playbooks to configure the application AMI, Chef server, CI/CD servers, MongoDB replica set clusters and other standalone servers
- Scripts to bootstrap the EKS cluster

Step 3: Prepare for a DR Drill

The preparation for a DR drill should be done in advance through a specified process. The following is a sample method to get ready for a DR drill:

- Select an environment similar to production
- Prepare a plan to launch the complete production infrastructure in the recovery region
- Identify all the application dependencies in the recovery region
- Configure cross-region backups of all databases and configurations
- Get the automation scripts ready with the help of Terraform, Ansible and shell scripts
- Identify the team members for the DR drill and make their responsibilities known
- Test your automation scripts and backup restoration in the recovery region
- Note the time taken for each task to get a rough estimate of the drill time

Step 4: Execute the DR Drill

The objective of the DR drill is to test the automation scripts and obtain the exact RTO.
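As a small illustration of the cross-region backups in Step 1 and the Terraform automation in Step 2, S3 replication to the recovery region could be sketched roughly as follows. The bucket names and regions are placeholders, and the production bucket and replication IAM role are assumed to be defined elsewhere:

```hcl
# Provider alias for the recovery region (placeholder region).
provider "aws" {
  alias  = "recovery"
  region = "us-west-2"
}

# Replica bucket in the recovery region (placeholder name).
resource "aws_s3_bucket" "replica" {
  provider = aws.recovery
  bucket   = "myapp-backups-replica"
}

# Replication requires versioning on both source and destination buckets.
resource "aws_s3_bucket_versioning" "replica" {
  provider = aws.recovery
  bucket   = aws_s3_bucket.replica.id
  versioning_configuration {
    status = "Enabled"
  }
}

# Replicate every object from the production bucket to the recovery region.
resource "aws_s3_bucket_replication_configuration" "backup" {
  bucket = aws_s3_bucket.production.id  # assumed to exist elsewhere
  role   = aws_iam_role.replication.arn # assumed role with replication permissions

  rule {
    id     = "cross-region-backup"
    status = "Enabled"
    destination {
      bucket        = aws_s3_bucket.replica.arn
      storage_class = "STANDARD_IA"
    }
  }
}
```

Database, DNS and configuration-server backups would each need their own equivalent, service-specific setup; this sketch covers only the S3 bucket item from the list above.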
Once the plan is set, decide on a date and time to execute your DR drill. Regular practice is advisable to perfect your restoration capabilities.

Benefits of DR Drills

- Practicing DR drills boosts confidence that the production environment can be restored within the decided timeline.
- Drills help identify gaps and provide exact RTO and RPO timelines.
- They provide your customers with research-backed evidence of your disaster readiness.

Conclusion

Though AWS regions are very reliable, preparing for a disaster is a business-critical requirement for a SaaS application. Multi-region or multi-cloud deployments are complex, expensive architectures, and deciding on the appropriate DR option depends on your budget and the uptime SLA you must meet when recovering from such disasters.