
Deploying AWS Lambda Code In Different Environments

Keval R. Gohil | Jun 05, 2020


The beauty of AWS Lambda is that it allows you to deploy and execute code without needing any physical server. In other words, you can set up “serverless” architecture using AWS Lambda.

When deploying your code, you may want to deploy it in various environments (such as testing, development, and production), so that you can test your code before going live.

This article takes you through the steps of setting up Lambda functions in different environments by using AWS Lambda function aliases and versioning. By the end of this blog, you will know how to do the following:

  • Create an alias for an AWS Lambda function
  • Associate an AWS Lambda function alias with an AWS API Gateway stage
  • Secure an AWS API Gateway stage with an API key and rate limiting
  • Reassociate versions with an alias

These are the essentials for managing different environments within the serverless system and for the smooth rolling out of releases.

Create an Alias for AWS Lambda Function

The following section shows you how to create environment aliases for a Lambda function and associate them with function versions. We will create two aliases for the development and production environments as an example.

Create a Lambda function

To create a Lambda function, follow the steps below:

  1. Log in to the AWS Management Console and select AWS Lambda from the Services list.
  2. Click on the Create function button, and then choose the Author from scratch option.
  3. Configure the Lambda function based on your requirements. Choose Node.js 12.x as the runtime and click on the Create function button.
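If you prefer working from the command line, the same function can be created with the AWS CLI. This is a sketch, assuming a function named demo-function (the name used throughout this article), a handler file index.js, and a placeholder IAM role ARN:

```shell
# Package the handler code (index.js is an assumed file name)
zip function.zip index.js

# Create the function; the role ARN and account id are placeholders
aws lambda create-function \
  --function-name demo-function \
  --runtime nodejs12.x \
  --handler index.handler \
  --role arn:aws:iam::123456789012:role/demo-lambda-role \
  --zip-file fileb://function.zip
```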


Publish Version

After creating a Lambda function, you can publish the version as follows:

  1. Go to the newly created function.
  2. Select Publish new version from the “Actions” drop-down menu:


  3. Publish the new version with an appropriate version description, for example, “Initial Stable Version.”
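The equivalent step can also be done from the AWS CLI (a sketch, assuming the demo-function name from this example):

```shell
# Publish an immutable version of the current $LATEST code
aws lambda publish-version \
  --function-name demo-function \
  --description "Initial Stable Version"
```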


Create an Alias for Each Environment

The next step is to create an alias for each environment, as shown below:

  1. From the Actions drop-down menu, select Create a new alias.
  2. Add the alias Name, Description, and Version, as shown in the following example:


Note: You have to perform this step twice, once for the development environment and once for the production environment.

  3. Set the version to $LATEST for the development environment, and to 1 (the version we published in the Publish Version section above) for the production environment.


These steps ensure that a specific version of the Lambda function is assigned to the production environment, and the latest version is assigned to the development environment.
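Both aliases can also be created from the AWS CLI. The sketch below assumes the demo-function name used in this example:

```shell
# The development alias tracks the moving $LATEST version
aws lambda create-alias \
  --function-name demo-function \
  --name development \
  --function-version '$LATEST' \
  --description "Development environment"

# The production alias is pinned to the published version 1
aws lambda create-alias \
  --function-name demo-function \
  --name production \
  --function-version 1 \
  --description "Production environment"
```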

Associate the AWS Lambda Function Alias With the AWS API Gateway Stage

This section shows how to create a new API Gateway REST API and associate the development and production stages with the respective Lambda function aliases.

Create a New REST API

A REST API gives clients an HTTP endpoint through which the Lambda function can receive requests and return data. So, the next step is to create a REST API using the steps given below:

  1. Log in to the AWS Management Console and select API Gateway from the services list.
  2. Click on the Create API button.
  3. On the Choose an API type page, go to the REST API option (the public one) and click on Build.
  4. On the next page, ensure that the Choose the protocol section has REST checked, and the Create new API section has New API checked. Enter the API name in the Settings section and click on Create API.


  5. Now add a new resource to the API gateway as shown in the following screenshot:


  6. Add a simple method of the required type. In this example, we have created a GET method for "/demo-resource".


Deploy and Add Stage Variable

Now that the API is created, let’s deploy it.

  1. Deploy the API in two new stages and name them development and production.
  2. In the Stages tab of API Gateway, navigate to each deployed stage and add a stage variable with the key lambdaAlias and the stage name as the value.
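From the AWS CLI, the deployment and the stage variable can be combined into one command per stage. This sketch uses abc123xyz as a placeholder REST API id:

```shell
# Deploy to the development stage with its lambdaAlias stage variable
aws apigateway create-deployment \
  --rest-api-id abc123xyz \
  --stage-name development \
  --variables lambdaAlias=development

# Repeat for the production stage
aws apigateway create-deployment \
  --rest-api-id abc123xyz \
  --stage-name production \
  --variables lambdaAlias=production
```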


Associate Stage With Lambda Alias

  1. In the Resources tab of API Gateway, select the REST API method that we created above.


  2. Click on Integration Request, which is associated with Mock by default.
  3. Select the Lambda region in the Integration Request section and set the Lambda Function field to the name of the Lambda function followed by :${stageVariables.lambdaAlias}. In our example, this is demo-function:${stageVariables.lambdaAlias}, as shown below:


  4. After clicking Save, a prompt appears with a CLI command and instructions to add invoke permission for each stage. Execute the command from the AWS CLI once per stage, replacing ${stageVariables.lambdaAlias} with the stage's alias name.
  5. Once the commands have been executed, the API Gateway stages are attached to the respective Lambda function aliases.
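The permission command looks roughly like the sketch below; the exact command is shown in the API Gateway prompt, and the account id, region, and API id here are placeholders:

```shell
# Allow the API to invoke the development alias of the function
aws lambda add-permission \
  --function-name "arn:aws:lambda:us-east-1:123456789012:function:demo-function:development" \
  --statement-id apigateway-development \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:us-east-1:123456789012:abc123xyz/*/GET/demo-resource"

# Repeat for the production alias with its own statement id
```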

Securing AWS API Gateway Stage With API Key and Rate Limiting

The following section shows how to add API key security to the API gateway and apply appropriate rate limiting to safeguard the respective environments.

Create Usage Plan

  1. Create an API usage plan for development and production environments with appropriate throttling and quota for the respective stage.


Note: For more information on the rate-limiting algorithm, see the AWS API Gateway throttling documentation.
  2. Now associate the API gateway stage with the respective usage plan while creating it. In our use case, we associate the development usage plan with the development stage of the API gateway.
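A CLI sketch of the development usage plan; the throttle and quota values are illustrative, not recommendations, and the API id is a placeholder:

```shell
# Usage plan with throttling and a daily quota, attached to the development stage
aws apigateway create-usage-plan \
  --name "development-plan" \
  --throttle burstLimit=10,rateLimit=5 \
  --quota limit=1000,period=DAY \
  --api-stages apiId=abc123xyz,stage=development
```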


Create and Add API Key to Usage Plan

  1. Select each usage plan and go to the API Keys tab.


  2. Click on Create API Key and add to Usage Plan, then create the API key.


  3. After creating the API key, click on the newly created key from the respective usage plan to view it.
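The same two steps from the AWS CLI; the usage plan id below is a placeholder taken from the create-usage-plan output:

```shell
# Create an enabled API key for the development environment
aws apigateway create-api-key \
  --name "development-key" \
  --enabled

# Attach the key to the usage plan (ids are placeholders)
aws apigateway create-usage-plan-key \
  --usage-plan-id u5agep \
  --key-id abcd1234efgh \
  --key-type API_KEY
```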

Make API Key Mandatory For Resource

  1. Go to the Resource Method created for the API.
  2. Within the Method Execution section, click on Method Request.
  3. From the Settings section, set API Key Required drop-down to true.


  4. Deploy the API gateway to both of the created stages for the respective environments.

After completing the above steps, you will require the respective API keys to access the stages of the API gateway associated with different aliases.
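For example, a client would then pass the key in the x-api-key header when calling each stage; requests without a valid key are rejected. The URL and keys here are placeholders:

```shell
# Call the development stage with its API key
curl -H "x-api-key: <development-api-key>" \
  https://abc123xyz.execute-api.us-east-1.amazonaws.com/development/demo-resource

# Call the production stage with its API key
curl -H "x-api-key: <production-api-key>" \
  https://abc123xyz.execute-api.us-east-1.amazonaws.com/production/demo-resource
```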

Reassociating Version With Alias

This section demonstrates how you can update the alias version for a Lambda function, which you can use to either associate a new version to a production alias or revert code to an earlier version.

Publish New Version

  1. Publish a new version of the Lambda function once development is completed. (This step is optional if you are reverting the code to an earlier version.)

Switch Alias

Select the production alias from the Qualifiers drop-down of the AWS Lambda function.


Update Alias Version

  1. After switching the alias from the drop-down, scroll down to the Aliases section of the Lambda function and select a new version for the alias.


  2. Click on Save to complete the reassociation of the version with the alias.

The above steps should help you set up multiple environments in AWS Lambda. They ensure that the development URL always returns results from the latest Lambda code, while the production environment remains bound to one fixed version of the Lambda function.
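For example, promoting version 2 to production (or rolling back to version 1) is a single CLI call, assuming the demo-function name from this example:

```shell
# Point the production alias at version 2; use 1 instead to roll back
aws lambda update-alias \
  --function-name demo-function \
  --name production \
  --function-version 2
```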


Recommended posts

Sep 22, 2023 | 4 min. read

Is integrating your digital asset management system with a DXP a good idea?

In today's digital world, organizations are constantly looking for ways to improve the experiences they provide to their customers. One way they can accomplish this is to integrate their existing digital asset management (DAM) system with a digital experience platform (DXP).In this blog, we'll look at how a DXP and a DAM system differ when it makes sense to integrate your existing DAM system with a DXP and some of the benefits of this type of integration.What is a digital asset management system?A DAM system stores organizes, and manages digital content, including images, videos, graphics, and documents for use across an organization. DAM systems are typically used by companies that must control a large volume of digital assets while also remaining compliant with regulations applicable to their industries. DAM software securely stores and preserves data from loss while limiting access via workflows and user controls.While DAM systems are great for protecting and storing critical digital assets, there often must be a way to seamlessly deliver these assets to customers on the front-end presentation layer. That's where the digital experience platform can make the difference.What is a digital experience platform (DXP)?A DXP software platform helps organizations create, manage, and deliver exceptional digital experiences across multiple channels. Composable DXPs enable organizations to integrate their existing tech stack, including their DAM system, into one platform to create more streamlined and seamless user experiences (UX).A headless CMS is an essential component of the composable DXP because this decouples the back end from the front end so that each area can be developed separately. 
Yet, they can still communicate via an application programming interface (API) so that assets on the back end can easily be called up for delivery to multiple channels, including websites, smartphones, native apps, and social media.When it makes sense to integrate a DAM system and DXPWhile a DXP alone can serve as a central repository for storing and managing all types of digital assets, replacing an existing DAM system may sometimes be feasible. Or an organization may be unwilling to move large volumes of data from their DAM system to a DXP. However, while the DAM software securely stores digital assets with workflows and user controls, it only sometimes provides an easy way for organizations to leverage these assets to improve user and customer experience.A DXP and DAM system integration can help an organization to centralize the management of its digital assets, improve the searchability and discoverability of these assets, and streamline omnichannel delivery while at the same time protecting assets from unauthorized use.A DXP can be integrated with your full tech stack, including the DAM system, proprietary software, analytics tools, marketing automation, CRMs, and more. One of the excellent capabilities of a composable DXP is its modular and decoupled CMS so that apps and integrations can take place over time for seamless, uninterrupted user and customer experiences. 
This puts the organization in control of prioritizing when integrating each of its systems with the DXP and when to roll out new features and functionality.While integrating your tech stack with a DXP can take some time, it's well worth the effort because it empowers organizations to keep up with customer expectations for more personalized and relevant digital experiences on all their channels based on real-time feedback.The benefits of integrating DXPs and DAM systemsThe integration of DXPs and DAM systems can provide several benefits for organizations, including:Increased efficiency Organizations can save time and resources by centralizing the management of digital assets. This is because they no longer need to maintain multiple asset management systems.Improved asset managementDAM software provides powerful features for managing digital assets across teams, such as asset tagging, version control, and workflow automation. DAMs can help organizations to keep their assets organized and up-to-date.Enhanced content deliveryDXPs can help content creators deliver content to various channels, such as websites, mobile apps, and social media. This can help organizations reach users on the device or channel of their choice and expand their audience.Personalized digital experiencesDXPs can be used to personalize digital experiences for individual users by gauging user feedback quickly and using data from the DAM system to select the most relevant assets for these users. This can help organizations to engage with their customers on a more personal level.Things to consider before making a decisionWhen choosing an integration solution, it is essential to consider your organization's unique needs. Some factors include the organization's size, the number of digital assets to be managed, and the desired security and compliance features.Second, you need to think about your digital strategy. 
If you want to create and deliver personalized digital experiences, a DXP can help you, even if you already have a DAM system.By integrating your existing DAM system with a composable DXP, you can enhance marketing automation and ensure your marketing teams can easily access your latest and greatest digital assets. Then, they can leverage them to create more engaging and personalized experiences for greater customer satisfaction.Finally, you need to think about your budget. Integrating a DAM system with a DXP can be a significant investment. But if you're serious about creating and delivering outstanding digital and customer experiences, it's an investment that's worth making.Here are some other essential things to consider before deciding to integrate your DAM system with a DXP:Not all DXPs are created equal. Ensure your chosen platform is composable to integrate with your full tech stack, including your DAM system.Choose the correct integration approach. There are several ways to integrate a DXP with a DAM. One standard method is to use an API. The DXP can use the API to access the DAM's assets and then deliver them to the desired channel. Another approach is to use a plugin. A plugin can be installed on the DXP to make it easier to integrate with the DAM.Plan for the integration. Integrating two systems can be a complex process. It's essential to plan carefully and to involve all stakeholders in the process ahead of time.Provide training. Once the systems are integrated, it's critical to provide training to your users. This will help them understand how to use the new system and how it can benefit them.Learn MoreBy centralizing the management of digital assets and improving the searchability and discoverability of those assets, organizations can deliver more personalized and relevant digital experiences to their customers by integrating their existing DAM system with a DXP. To learn more about our composable DXP, schedule a demo today.

Sep 01, 2023 | 4 min. read

Does your organization need a digital asset management (DAM) system?

Delivering excellent customer experiences in the digital age requires a lot of content in various formats. That's why businesses are generating and storing more content than ever before. However, organizing, managing, and assessing this content can become a real challenge with more volume.Digital asset management (DAM) systems can help organizations solve this problem. In this blog post, we will explain what a DAM system is and cover its essential components. We'll also explore the benefits of implementing a DAM system, how to choose the right platform, best practices for implementing DAM, and more.What is a digital asset management (DAM) system?DAM systems store, organize, and distribute digital assets, and they have features like tagging, version control, and history tracking for efficiently managing these assets. They serve as repositories for many different types of content, including images, videos, documents, audio files, presentations, and more. A DAM platform serves as a single source of truth for all the different teams in an organization, from marketing to business development, enabling collaboration between these teams.How does a DAM system differ from a content management system (CMS)? A traditional CMS only manages the content for your website. DAM software can manage content across your organization, allowing content to be used across multiple channels, not just your website.On the other hand, a DAM system can't publish content to your website or other channels. Your organization will still need a CMS or digital experience platform (DXP) to push digital assets to your website and other channels. 
To learn more about improving efficiency by integrating a DAM system with a DXP, read our blog, "Is integrating your digital asset management system with a DXP a good idea?"Again, the DAM system stores digital assets so they're easy to access and manage, while a CMS or DXP distributes them to where they need to be seen – for example, websites, mobile apps, and social media.Four main benefits of leveraging DAM softwareThere are many reasons why businesses need to have a well-organized and efficient DAM system.First, it can help improve content creation and collaboration efficiency. When all of a business's digital assets are stored in one centralized location, it's easier for team members to locate and access the assets they need for marketing campaigns and other initiatives. It's also easier to share these assets and collaborate about them with others in the organization. This can save time and resources, improve the quality of content, and optimize the speed at which it can be delivered.Second, a well-organized DAM system can help improve the search and retrieval of digital assets. When assets are correctly tagged, and metadata is managed effectively, it's easier for users to find the necessary assets quickly. This can save time and frustration and help ensure the right assets are used for suitable projects.Third, a DAM system can ensure consistent branding and messaging across all the business's digital assets. When assets are stored in a centralized location and tagged with consistent metadata, ensuring they all use the same branding and messaging is easier.Lastly, a DAM system reduces unnecessary duplication and wasted resources. When assets are stored in one central location, tracking which assets have been used and when they are more accessible can help prevent team members from creating duplicate assets, saving time and money.Essential components of DAM systemsAre platforms like Dropbox, Google Drive, and OneDrive considered DAM systems? The answer is no. 
Even though these platforms provide some basic capabilities for managing digital assets, they need to have DAM platforms' robust features and functionality.Efficient DAM systems have several essential components. These include:Centralized storageA DAM system should provide a centralized repository for storing all digital assets across an organization, making them easier to locate and use when needed.Tagging and metadata managementA DAM system should allow users to tag and manage metadata for their digital assets. This makes searching and retrieving assets easier and helps ensure consistent branding and messaging.Version controlA DAM system should provide version control for digital assets. This means users can track asset changes over time and revert to previous versions if necessary.History trackingA DAM system should track the history of all changes to digital assets. This makes it easy to see who made changes to assets, when the changes were made, and why the changes were made.Search and retrievalA DAM system should have robust search and retrieval capabilities, enabling the assets to be located even without an exact filename.CollaborationA DAM system should allow users to easily collaborate on digital assets across many different teams in an organization.SecurityA DAM system should provide robust security features to protect digital assets from unauthorized access.ReportingA DAM system provides useful reporting features that can help businesses track how their digital assets are being used and how often specific assets are used. This can help to identify over or under-utilized assets.Choosing the right platformChoosing the right DAM system is essential for any business that wants to improve its digital asset management. Here are a few points to consider when shopping for the right system.Organizational needs and goals: What are the specific needs of your organization? What is the main reason for implementing DAM software? 
What plans does the business hope to achieve once the system is implemented?Features: What features does your organization require? Does it need a system with version control? Does the system need to integrate with existing software? Is this also an excellent time to upgrade your CMS or implement a new DXP?Scalability: How much growth do you expect in the future? Do you need a system that will evolve as your business scales or changes?Cost: How much will your company spend on a DAM system?Once you have considered these factors, it's time to narrow down your choices. Check out our Marketplace for DAM providers who partner with Contentstack.Implementation best practicesOnce you have chosen a DAM system, be mindful of these implementation best practices.Get buy-in from stakeholders. A successful DAM implementation requires the support of all stakeholders from the top down in an organization. Communicate the benefits of DAM to everyone affected by the system and gather feedback.Set realistic expectations. DAM is not a magic bullet. It takes time and effort to implement and manage a DAM system effectively. Expect to see results after some time.Be flexible. As your needs change, you may need to adjust your DAM system. Be prepared to make changes as required.Start small and scale up. Don't try to implement a DAM system that is too complex or ambitious for your organization. Start with a few assets and users, then gradually expand the system as required.Get help from a consultant. If you're unfamiliar with DAM software, consider seeking the advice of a consultant. A consultant can help assess business needs, choose a DAM system, and implement the system successfully.What to expect during implementationThere are four basic steps when implementing a DAM system:Configuring the DAM system. Once your organization has chosen a DAM solution, it must be configured to meet the specific needs of your business. 
This includes setting up user permissions, creating metadata fields, and configuring workflows.Migrating your assets to the DAM system. This is the process of transferring your existing assets to the new DAM system. It is essential to do this carefully to avoid data loss or corruption.Training users. Once digital assets have been migrated to the DAM system, users must be trained to use it. This includes teaching them how to search for assets, manage permissions, and create workflows.Monitoring and maintaining the DAM system. Once the DAM system is up and running, you must watch it to ensure it performs as expected. This includes monitoring the system's performance, security, and compliance.Measuring successHere is a sample of the metrics you can track to measure your DAM implementation's success and demonstrate a return on investment.Asset retrieval time: How long does finding and retrieving the necessary assets take?Collaboration efficiency: How easy is it for team members to share and collaborate on digital assets?Cost savings: How much money have you saved by implementing a DAM system?Productivity improvements: Have you seen any improvements in productivity since implementing a DAM system?Brand consistency: Are your digital assets more consistent with your brand guidelines than pre-implementation?Future Trends in Digital Asset ManagementDAM systems are constantly evolving. Here are a few trends to watch for in the future:AI-powered metadata tagging and auto-classification: AI can automate the tagging and classification of digital assets. This can save businesses time and resources.Integration with emerging technologies: Companies increasingly integrate DAM systems with technologies like AR/VR and voice assistants. This makes it easier for businesses to share digital assets with their customers and partners.Evolving role of DAM in a dynamic digital landscape: DAM is becoming increasingly important in a dynamic digital landscape. 
Businesses need to manage their digital assets effectively to stay competitive.Learn moreDAM systems can be a valuable asset for organizations of all sizes. Businesses can improve their digital asset management, productivity, and brand consistency by choosing the right DAM system and implementing it correctly. Schedule a free demo today to learn more about Contentstack's composable digital experience platform or how this can work with a DAM system to improve user experience.

May 23, 2023 | 5 min. read

5 best practices for improving customer satisfaction

Customer satisfaction, a measure of how much a company’s products or services meet or exceed its customer’s expectations, continues to dominate the business world. Customer satisfaction directly correlates with and translates to customer happiness, which is reflected in your business ratings. It is an important metric that helps measure how well a business is meeting the needs and expectations of its customers. Therefore, understanding and enhancing customer satisfaction is critical to ensuring long-term success for your business. Why customer satisfaction mattersWith more than 96% of customers claiming that customer service is essential to brand loyalty, it’s no secret that customer satisfaction is vital for your business's growth. It is the key to keeping your current customers and retaining new ones. Customer satisfaction directly affects customer loyalty, and it affects how customers may intend to associate with your brand in the future.Therefore, it’s crucial to ensure that your customers are happy with your products and services. Customer satisfaction provides insight into things that need improvement or ways to improve your services or product to serve your customers better. A high level of customer satisfaction shows that a business is providing quality products or services, meeting customer expectations and delivering an overall positive experience. In today's digital world, understanding the market from the end user's perspective is the need of the hour. For example, in a product-based SaaS organization, recognizing customer requirements, curating the product according to their needs, and understanding best practices for improving customer satisfaction can help you get an edge over your competitors.Top 5 best practices for improving customer satisfactionSo, what factors affect customer satisfaction? How do you improve it? Understanding what drives customer satisfaction is a must to improve it. 
Here are the top five best practices for improving customer satisfaction in your SaaS business:Provide convenience with in-app chat featuresConvenience lets customers use your products or services without hassle. There’s comfort in knowing that everything will be taken care of, no matter what.A virtual assistant or chatbot is one way to provide convenient customer service. The in-app product chat feature provides the easiest way for customers to connect with your support team or agent. Customers always look for an easy way to connect quickly with the support team in case they need help while using the product or if they have any feedback about a feature. However, there should not be limitations to reaching out to the support team or agent only via chat support; an email or contact number should be available to expedite the process.Deliver the human touch with personalization Delivering a personalized customer experience helps establish a strong emotional bond with your customers. Research suggests that 80% of customers are more likely to buy products or services from a brand offering them personalized experiences.Providing a human touch or lively experience is essential, which is not the case when implementing a chatbot for answering queries. Customers expect to get the most relevant answers to their queries with little back and forth. Most of the time, organizations implement a chatbot by designing it to provide the most appropriate answers to the questions asked by the customers. However, after a certain point, bots are not self-sufficient to answer these accurately. In such instances, a live customer support agent can interact with the client to gather the required information by probing for correct questions. 
This gives additional assurance to the client that the team is looking into their query and that they will get a resolution soon.Track response and turnaround times (FRT and TAT)The quicker the response to a customer’s complaint, the better it is for the customer and the business. First Response Time (FRT) and Turn Around Time (TAT) are the most critical factors in engaging customers. First response time (FRT) is a metric used to measure the time it takes for a business to respond to a customer's initial inquiry or request for assistance. For example, this could be a customer support email, phone call, or message on social media.FRT is crucial because it directly impacts customer satisfaction. Customers generally expect a quick response to their inquiries, and delays in response time can lead to frustration and a negative experience. A prompt first response time can help establish trust and build a positive relationship between the customer and the business.When customers reach out to the support team, they expect an initial reply or acknowledgement of their query. A prompt reply or acknowledgement assures customers that the organization is dedicated to understanding their problems and helps win customer trust and improve satisfaction. In customer service, turnaround time (TAT) is often used to measure the time it takes to resolve customer inquiries or complaints. This includes the time it takes to provide a first response (FRT) plus the time it takes to resolve the issue entirely. A low TAT generally indicates that a business can quickly and efficiently resolve customer issues, leading to higher customer satisfaction and loyalty.Organizations should serve customer requests 24/7, across all time zones. 
It is important to share regular updates on specific cases, so use a unique reference number that can be shared with the client so that they can reach out for updates.Resolving customer-reported bugs in a timely manner helps unblock users and positively affects the organization. Once a bug is resolved and/or an enhancement is implemented, updating customers with a unique reference number is a key factor in improving customer satisfaction. Obtain customer feedback Obtaining customer feedback is essential for enhancing customer satisfaction as it helps businesses gain insight into their customers' needs, preferences and pain points. By acting on this feedback, businesses can make necessary changes to meet customer needs and expectations better, leading to increased satisfaction and loyalty.Feedback helps identify the gaps between customers and businesses. Understanding customer needs via feedback is very important so details can be discussed with the product team to implement them within the current functionality. A transparent approach to customer feedback collection ensures customers are heard, improving their overall experience. Ensure customer success with proper onboarding Earlier in the onboarding process, determining the customer's end goal for using your solution and defining milestones to achieve that goal improves your customer's experience and assures them that they are in the right hands moving forward. Connecting with customers is important. Schedule regular meetings to understand their workflow and hand-holding (when necessary) until they are fully live. This is usually done with dedicated customer success managers and solutions architects assigned to specific customers. It helps them achieve their use case, clear any roadblocks with the product and get technical guidance when needed. CSMs also help clients to expedite resolving the important features or bugs they may experience. 
A wonderful onboarding experience engages customers better, making them less likely to churn and more likely to make repeat purchases. A solid onboarding experience also makes customers feel valued while increasing their product adoption.

Final thoughts

The right strategies and best practices can help improve customer satisfaction, propelling your SaaS product's trajectory toward reduced churn and increased business revenue. Investing in contemporary customer satisfaction strategies (artificial intelligence, visual tools, and an omnichannel approach) enhances your ability to offer personalized experiences. Most importantly, increased customer satisfaction keeps your customers returning and is directly linked to growing your business's top-line revenue.

Mar 28, 2023 | 7 min. read

From legacy systems to microservices: Transforming auth architecture

Contentstack receives billions of API requests daily, and every request must be validated as a valid Contentstack identity. It is a common industry practice to achieve this using some sort of "identity token" for every request. Imagine having multiple types of identity tokens: session tokens, OAuth tokens, delivery tokens, management tokens, and so on. Securing billions of API requests daily across all of these can be challenging. We decided to address this by spinning up a new team that handles the complex problems of user authentication and authorization in a token-agnostic platform.

Our transition journey

Contentstack started as an API-first headless CMS platform that allowed content managers to create and manage content while simultaneously and independently enabling developers to use Contentstack's delivery API to pull that content and render websites and applications. This means that Contentstack's traffic increases proportionately to the traffic received by our customers' websites and applications.

With increased traffic and usage, we catered to various new use cases by developing new features. These features were powered by a set of microservices, each catering to a particular feature domain and needing support for processing multiple identity tokens with roles and permissions associated with them. The whole system had become quite complex, and performing auth had become a great challenge.
This prompted us to redesign our auth architecture to address these issues with a token-agnostic, low-latency platform. Read on to learn more about this journey and how we have been able to:

- Transition from a monolith to a low-latency, microservices-based auth (authentication plus authorization) and rate-limiting architecture.
- Set up centralized authentication for multiple (any-domain) microservices that are part of the same Kubernetes cluster.
- Set up decentralized, self-serviced, policy-based authorization for internal services and teams.

Increasing feature sets increased domain microservices, which increased the complexity of performing auth.

Monolithic auth architecture

Monolithic architectures can be difficult to maintain, scale, and deploy. In a monolithic architecture, user authentication and authorization are typically tightly coupled with the application code, making it difficult to implement and maintain robust security measures. Monolithic architectures often rely on a single authentication and authorization mechanism for the entire application, which can limit the system's flexibility to accommodate different types of users or access levels.

Performing auth in a typical monolithic architecture.

In monolithic architectures, the steps involved in auth are the following:

1. Users use their credentials at the client to generate a session token, or use an existing identity token to generate other identity tokens.
2. Users then use the generated identity token to perform a business operation by making a request to the application server.
3. Once a request is received at the application server, the authentication middleware authenticates the token and forwards the request to the business module.
4. The business module performs the business operation based on the authorization rules applied to the user identity.

Problems with monolithic auth architecture:

- Authentication and authorization logic is mixed with the business logic.
- Changing the way an identity performs an operation on a resource involves a change in the associated auth-related logic.
- Each domain implements the authorization logic individually, causing differences in implementation.
- Since authorization logic is deeply nested in business logic, we lack visibility into the authorization rules applied to a resource.
- Shipping new authorization logic requires a fresh deployment of the application image.
- New microservices require knowledge of the various identity tokens and the resource authorization rules to be applied.

Microservices auth architecture

Microservices offer a more flexible, modular approach that allows for easier maintenance, scalability, and deployment. With microservices, each service can be developed, deployed, and scaled independently, allowing for faster time to market, improved fault tolerance, and better alignment with modern development practices. Additionally, microservices offer more efficient use of resources and better support for diverse technology stacks.

Authentication

Why centralized authentication?

Centralized authentication is a security model in which authentication is managed by a central authority, such as a server or service, rather than being distributed across multiple systems or applications. There are several reasons why centralized authentication is commonly used and considered advantageous, including increased security, simplified management, improved user experience, and lower costs. While there are some drawbacks to centralized authentication, such as the increased risk of a single point of failure and the complexity of managing the central authority, the benefits often outweigh the risks.
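To make this concrete, the core decision of such a central authentication service — look up the token's identity, enforce a rate limit, and attach user context as headers — can be sketched as follows. This is a simplified, in-memory stand-in (real deployments would back the lookups with a store like Redis and implement a proxy-facing authorization contract); the token values and header names are illustrative:

```typescript
// Sketch of a central auth check: authenticate the token, rate-limit the
// identity, and attach user context for upstream services.
type CheckResult =
  | { status: "ok"; headers: Record<string, string> } // forwarded upstream
  | { status: "unauthenticated" }                     // e.g. HTTP 401
  | { status: "rate_limited" };                       // e.g. HTTP 429

// In-memory stand-ins for the Redis-backed token store and counters.
const tokenStore = new Map<string, { userId: string; org: string }>();
const rateCounts = new Map<string, number>();
const RATE_LIMIT = 100; // requests per window (illustrative)

function check(token: string | undefined): CheckResult {
  const identity = token ? tokenStore.get(token) : undefined;
  if (!identity) return { status: "unauthenticated" };

  const count = (rateCounts.get(identity.userId) ?? 0) + 1;
  rateCounts.set(identity.userId, count);
  if (count > RATE_LIMIT) return { status: "rate_limited" };

  // User context attached as request headers for upstream services.
  return {
    status: "ok",
    headers: { "x-user-id": identity.userId, "x-org-id": identity.org },
  };
}

tokenStore.set("tok_123", { userId: "u1", org: "org1" });
console.log(check("tok_123").status); // "ok"
console.log(check("bad").status);     // "unauthenticated"
```

Because every request passes through this one check, upstream services can trust the user-context headers without re-validating tokens themselves.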
Centralized authentication and rate-limiting at the edge of the service mesh.

The steps involved in the centralized authentication process are the following:

1. Any incoming request to the Kubernetes cluster first lands at the Istio ingress gateway.
2. The request containing the identity token is proxied to a central authentication gRPC service with the help of envoyproxy's external authorization filter.
3. The central authentication service queries Redis with the identity token and the metadata associated with the request.
4. Redis responds with the identity associated with the token and the current rate-limit count based on the request metadata.
5. The central authentication service responds to Istio with one of the following:
   - an authenticated response, with user context attached to the request in the form of request headers
   - an unauthenticated response
   - a rate-limit-exceeded response
6. An authenticated request containing the user context is then forwarded to the upstream service.

Advantages over the monolithic architecture:

- It is easier to onboard newer microservices to the central authentication service by using label-based istio-injection.
- All requests are authenticated and rate-limited at the edge of the service mesh, ensuring that each request entering the cluster is always rate-limited and authenticated.
- The request forwarded to the upstream microservice has the user identity context attached in the request headers, which can be used to apply authorization rules.
- Keeping authentication centralized eliminates the problem of multiple mutations performed by the upstream microservices on the token's identity.

Authorization

Centralized authorization

We tried a model where, along with authentication and rate limiting, we also made authorization a responsibility of the central authentication and rate-limiting service. The service would first identify the incoming request's identity from the token and apply rate limiting based on the request metadata.
Once the user identity was known, authorization rules could be applied to it, thereby performing the entire auth at the edge of the service mesh. The problems with this model are the following:

- It could only perform basic authorization at the edge based on the request metadata provided, such as validating organizations, stacks, etc. It could not perform fine-grained authorization, such as finding out which content types the logged-in user had access to.
- For RBAC, each domain has its own roles and permissions; performing authorization for such requests requires knowledge of the upstream domain and leads to the addition of domain-specific logic in the centrally managed, domain-agnostic platform.
- With newer domain microservice additions, this again leads to a lack of visibility into the authorization rules applied to a resource.

Distributed authorization with a central authorization service

We then tried a model where we distributed authorization to the upstream microservices, with each upstream microservice making a call to a central authorization service. The authorization service had access to all the roles and permissions of the different domains and was able to give appropriate authorization results.
Authorization could now be performed from the upstream service's business module by making a network request over Kubernetes cluster networking, avoiding a call over the internet. The problems with this model are the following:

- The central authorization service becomes a single point of failure.
- Any change in the API contract defined by the central authorization service requires all the upstream services to abide by it, making it complex to ship these changes independently.
- Performing authorization adds a network hop, thereby increasing latency.

Distributed authorization with the sidecar pattern

Learning from the previously discussed disadvantages, we wanted to build a model with distributed, low-latency authorization that made shipping authorization logic an independent activity.

Architecture

The architecture involves the following components:

- Auth sidecar
- Central policy service
- Auth SDK

Architecture for authorizing an authenticated request with the sidecar pattern.

Auth sidecar

The auth sidecar is a gRPC service that gets injected along with the microservice's application container in the same Kubernetes pod. Let's understand how this architecture helped us tackle the previously mentioned problems.

- Single point of failure: The auth sidecar runs with the application container in the same pod, so any failure is limited to the current pod. Restarting the pod gives us a fresh set of application and auth sidecar containers.
- Independent delivery: Since the auth sidecar container is shipped along with the application container, the application service can decide which version of the sidecar image to use, making the delivery of newer versions of the authorization sidecar independent.
- Low latency: There is no network hop involved in making a gRPC call to the auth sidecar running in the same pod.
This helps the application get the authorization result with very low latency (a few milliseconds).

- Updating authorization logic: The auth sidecar periodically downloads fresh policy bundles; any time the policy bundle coming from the central policy service changes, the auth sidecar updates its local policy cache with the new bundle. This way, updating authorization logic does not involve a fresh deployment or restart of the application container.

Components involved in the auth sidecar

Responsibilities of the components involved in the authorization sidecar.

- Aggregator: The aggregator fetches authorization-related data for the current identity based on the metadata provided by the application service in the gRPC call. It then aggregates this data to be evaluated against the authorization policy.
- OPA engine: We use OPA (Open Policy Agent) to periodically download fresh policies and evaluate the policy path mentioned in the gRPC call against the aggregated data.

Central policy service

The central policy service is a repository of policy bundles (*.rego files) that are independently managed by the domain microservices. The maintainers of the domain microservices create these policies for the various resources that need authorization. Since these policies only contain rules, this greatly increases the visibility of the authorization rules being applied to a particular resource.

Auth SDK

The auth-sdk is an internal library that we developed to help developers of upstream microservices easily communicate with the different auth components.
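The OPA evaluation step described above can be sketched as a call to OPA's Data API, which serves policy decisions over HTTP at `/v1/data/<policy-path>`. The request/response shape (`{"input": ...}` in, `{"result": ...}` out) is OPA's; the policy path and input fields below are illustrative:

```typescript
// Sketch of a sidecar-style policy evaluation against a co-located OPA engine.
// The policy path ("cms/entries/allow") and input fields are hypothetical.
interface OpaResponse {
  result?: { allow?: boolean };
}

// Pure helper: default-deny when the policy or decision is missing.
const isAllowed = (res: OpaResponse): boolean => res.result?.allow === true;

async function authorize(policyPath: string, input: unknown): Promise<boolean> {
  const res = await fetch(`http://localhost:8181/v1/data/${policyPath}`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ input }),
  });
  return isAllowed((await res.json()) as OpaResponse);
}

// e.g. await authorize("cms/entries/allow", { user: "u1", action: "read" })
```

Defaulting to deny when the decision is absent keeps a missing or mis-named policy from silently granting access.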
The auth-sdk can do the following:

- Extract the user identity and other useful information attached to the request headers by the central authentication service
- Discover the various auth components and streamline communication with them
- Expose helper methods to perform any auth-related activity on behalf of the application service

Redesigned (new) architecture

Tracing the request lifecycle in our redesigned auth architecture.

Conclusion

Microservices-based architectures can help address some of the challenges of monolithic architectures by separating user authentication and authorization into individual services, which can be developed, deployed, and maintained independently. This approach can provide greater flexibility, scalability, and security for user authentication and authorization.

However, it's important to note that transitioning to a microservices-based architecture can also come with challenges, such as increased complexity and a need for more advanced DevOps practices. Proper planning, implementation, and ongoing maintenance are crucial to ensuring a successful transition.

Dec 29, 2022

How React works in a composable architecture

React is a JavaScript library widely used by web developers to build composable elements for dynamic interfaces. It is declarative and flexible, updating web and app data without having to refresh the whole DOM every time.

A React CMS splits the roles of designers and developers, placing them into front-end and back-end roles respectively. React provides a collection of designated components used to maintain a structured front end, performing actions like validating forms, controlling state, arranging layouts, and passing in data.

Described as a headless infrastructure, the three main ingredients of a React CMS are React, a REST API, and GraphQL. Together they allow you to scale content across many channels and devices by eliminating the codebase dependencies that are prevalent in a traditional CMS environment.

When should you use a React CMS?

A React CMS is ideal for editing the elements that users interact with, from buttons to dropdowns on your website. For organizing larger projects, complex code logic is grouped by matching patterns to help you track the state of apps. It updates the DOM to reflect changes in app requirements, so content gets delivered without compatibility issues, tracking the modified versions of your components to back up your data before the system restarts.

If you prefer something more substantial than drag-and-drop customization, consider a React CMS for access to native API configurations and code blocks that are fully decoupled from the presentation layer. This saves you time on manually updating plugins or extensions, so you can divert resources to creating and deploying the app through its API-based integrations.

Moreover, a React CMS has been shown to improve performance by allocating less memory to track component changes.
To get around loading delays, it uses the virtual DOM to render only the assets found at the URL. Instead of receiving just themes and templates, you have complete control over the content layout, fetching chunks of data from API calls to populate your web pages with the desired elements.

How a React CMS works with APIs to distribute content

When React is combined with a CMS, it lets you preview the output of workflows before you publish them to a page. A React CMS can transmit on-demand data between the client and server, or even at compile time, to dictate how objects are rendered on a webpage.

Using a composable model, you can call the API to request information from root directories or databases on the server, dividing your website's functionality into closed environments that do not interfere with each other.

From a technical standpoint, a React CMS makes it possible to edit visual elements through your site's HTML by tying them back to the GraphQL schema as you fill in fields or toggle settings. It's also great for patching bugs in your JS bundles that might otherwise lead to delayed page interactions or even server downtime.

Rather than creating a project from scratch, the composable architecture makes it easy to reuse content over multiple channels. In addition, you can search the market for third-party integrations to help you build streamlined apps that contribute to the overall React ecosystem. As such, swapping out components is the way to go when your team is pressed for time before the next feature release.

By employing API-first methods, you won't have to monitor CMS servers or databases in messy clusters, unlike what happens with traditional CMS solutions.

What are the benefits and features of a React CMS?

A React CMS ensures the continuous operation of components in the app, giving you composable options to import modules that perform what you need on the client.
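As an illustration of this API-first flow, a React front end typically posts a GraphQL query to the CMS's content endpoint and renders the result. The endpoint URL, content type, and field names below are hypothetical, not a specific vendor's schema:

```typescript
// Hypothetical GraphQL query against a headless CMS content endpoint.
const POSTS_QUERY = `
  query BlogPosts($limit: Int!) {
    posts(limit: $limit) {
      title
      url
    }
  }
`;

// Pure helper: build the JSON body a GraphQL endpoint expects.
function buildGraphQLBody(
  query: string,
  variables: Record<string, unknown>,
): string {
  return JSON.stringify({ query, variables });
}

interface Post {
  title: string;
  url: string;
}

async function fetchPosts(endpoint: string, limit: number): Promise<Post[]> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: buildGraphQLBody(POSTS_QUERY, { limit }),
  });
  const { data } = (await res.json()) as { data: { posts: Post[] } };
  return data.posts;
}

// e.g. const posts = await fetchPosts("https://example.com/graphql", 5);
```

A component can then map the returned entries straight into markup, keeping content delivery fully decoupled from the presentation layer.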
Once you understand the fundamental components, it becomes easier to develop and maintain web apps by leveraging just the functionality required to deliver consistent user interactions.

To manage your databases, it utilizes GraphQL to query app data in a structured format. As an alternative to REST, GraphQL can be combined with packages such as Apollo or Axios to execute your API requests and cache and update data with new entries.

Another aspect is the custom editing experience, which generates dynamic content in an organized manner so you can avoid conflicts when loading HTML and JSON files in succession. If you're looking to implement a specific feature, such as a login page or a shopping cart to enhance the user experience, you can learn about it in detail through the support documentation.

The goal is to stabilize your app's performance during page loads to improve the accessibility of various media types. To see the CMS in action, you can simply declare the permissions and hierarchy of API objects using the default arguments. But before you map out the visuals, it's best to have a clearly defined scope for the app, taking measures to scale it in line with your network or server capacity.

Choosing a React CMS to decouple your web services

For enterprise workflows, React APIs are a must-have that can shorten the time to market by automatically cleaning content backlogs and preparing for site migrations. Since there are many options for React CMSes, you'll have to narrow down which libraries are capable of handling your app's payload.

If you want a composable CMS focused on developers, choose one that offers a large collection of third-party frameworks or extensions to cover all the bases of your React app. For example, you may need conditional fields to verify user accounts, or support for SQL to join multiple tables containing product details.
Another advantage is being able to catch protocol errors or software failures that would hurt performance indicators before they end up in the latest build. This keeps development productive, with room to grow into cross-platform capabilities.

The cost of implementation is well worth it for specialized use cases covering static site generators, mobile apps, and web services. In return, this puts custom assets, webhooks, and test scenarios at your fingertips, so you can keep adding integrations with other tools without worrying about the impact on existing code. With headless CMS functionality, you can frame API responses and multiple SaaS solutions around predictable outcomes to close the gap between React and your site content.

Learn more

If you would like to learn more about the benefits of a composable architecture, see our article "Why a composable CMS is right for you."

Schedule a free demo to experience the benefits of a composable CMS with Contentstack's headless CMS-based digital experience platform.