From legacy systems to microservices: Transforming auth architecture


Highlights

You'll learn how to:

  • Leverage modern design patterns: Use contemporary architectural patterns to develop efficient authentication and authorization solutions.
  • Deploy access tokens efficiently: Pass access tokens from the edge to individual microservices to ensure effective authentication in a microservices environment.
  • Acknowledge authorization challenges: While authentication is well served by mature standards like OAuth2 and OIDC, authorization presents its own unique challenges.
  • Customize access decision managers: Enhance your authorization architecture by customizing your AccessDecisionManager or AccessDecisionVoter.

Organizations can successfully transform their auth architecture by understanding and implementing these key points. Keep reading to learn more!


Contentstack receives billions of API requests daily, and every request must be validated against a valid Contentstack identity. It is common industry practice to achieve this using some sort of “identity token” for every request. Imagine having to handle multiple types of identity tokens, such as session tokens, OAuth tokens, delivery tokens, management tokens, etc.

Securing billions of API requests daily is a challenging problem. We decided to address it by spinning up a new team to handle the complex problems of user authentication and authorization in a token-agnostic platform.

Our transition journey

Contentstack started as an API-first headless CMS platform that allowed content managers to create and manage content while simultaneously and independently enabling developers to use Contentstack's delivery API to pull that content and render it to create websites and applications. This means that Contentstack’s traffic increases proportionately to the traffic received by our customers' websites and applications.

With increased traffic and usage, we catered to various new use cases by developing new features. These features were powered by a set of microservices, each catering to a particular feature domain and needing support for processing multiple identity tokens that had roles and permissions associated with them. The whole system had become quite complex, and performing auth had become a real challenge. This prompted us to redesign our auth architecture as a token-agnostic, low-latency platform.

Read on to learn more about this journey and how we have been able to:

  • Transition from a monolith to a low-latency microservices-based auth (authentication plus authorization) and rate-limiting architecture.
  • Set up centralized authentication for multiple (any domain) microservices that are part of the same Kubernetes cluster.
  • Set up decentralized and self-serviced, policy-based authorization for internal services and teams.

Increasing feature sets increased domain microservices, which increased the complexity of performing auth.

Monolithic auth architecture

Monolithic architectures can be difficult to maintain, scale and deploy. In a monolithic architecture, user authentication and authorization are typically tightly coupled with the application code, making it difficult to implement and maintain robust security measures. Monolithic architectures often rely on a single authentication and authorization mechanism for the entire application, which can limit the flexibility of the system to accommodate different types of users or access levels.


Performing auth in a typical monolithic architecture.

In monolithic architectures, the steps involved in auth are the following (a minimal sketch follows the list):

  1. Users provide their credentials at the client to generate a session token, or use an existing identity token to generate other identity tokens.
  2. Users then use the generated identity token to perform a business operation by making a request to the application server.
  3. Once a request is received at the application server, the authentication middleware authenticates the token and forwards the request to the business module.
  4. The business module performs the business operation based on the authorization rules applied to the user identity.
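
To make the coupling between auth and business logic concrete, here is a minimal Go sketch of steps 3 and 4 as they typically look in a monolith; the handler, token lookup and permission names are illustrative assumptions, not Contentstack's actual implementation.

```go
package main

import (
	"context"
	"errors"
	"net/http"
)

type ctxKey string

const userKey ctxKey = "user"

// User is an illustrative identity with a flat permission set.
type User struct {
	ID          string
	Permissions map[string]bool
}

func (u User) HasPermission(p string) bool { return u.Permissions[p] }

// lookupIdentity stands in for a session-store or database lookup.
func lookupIdentity(token string) (User, error) {
	if token == "" {
		return User{}, errors.New("missing token")
	}
	return User{ID: "demo", Permissions: map[string]bool{"entry:update": true}}, nil
}

// authenticate is the typical monolith middleware (step 3): it validates the
// identity token and stashes the resolved user on the request context.
func authenticate(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		user, err := lookupIdentity(r.Header.Get("Authorization"))
		if err != nil {
			http.Error(w, "unauthenticated", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r.WithContext(context.WithValue(r.Context(), userKey, user)))
	})
}

// updateEntry is a business handler (step 4); the authorization rule is
// buried inside the business logic, which is exactly the coupling at issue.
func updateEntry(w http.ResponseWriter, r *http.Request) {
	user := r.Context().Value(userKey).(User)
	if !user.HasPermission("entry:update") {
		http.Error(w, "forbidden", http.StatusForbidden)
		return
	}
	w.Write([]byte("entry updated"))
}

func main() {
	http.Handle("/entries", authenticate(http.HandlerFunc(updateEntry)))
	http.ListenAndServe(":8080", nil)
}
```

Because the permission check lives inside the handler, changing how an identity may operate on a resource means touching and redeploying business code, which leads directly to the problems listed below.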

Problems with monolithic auth architecture:

  • Authentication and authorization logic is mixed with the business logic.
  • Changing the way an identity performs an operation on a resource involves a change in the associated auth-related logic.
  • Each domain implements the authorization logic individually, leading to inconsistent implementations.
  • Since authorization logic is deeply nested in business logic, we lack visibility into authorization rules applied to a resource.
  • Shipping new authorization logic requires a fresh deployment of the application image.
  • New microservices need knowledge of the various identity tokens and of the resource authorization rules to be applied.

Microservices auth architecture

Microservices offer a more flexible, modular approach that allows for easier maintenance, scalability and deployment. With microservices, each service can be developed, deployed and scaled independently, allowing for faster time-to-market, improved fault tolerance, and better alignment with modern development practices. Additionally, microservices offer more efficient use of resources and better support for diverse technology stacks.

Authentication

Why centralized authentication?

Centralized authentication is a security model in which authentication is managed by a central authority, such as a server or service, rather than being distributed across multiple systems or applications. There are several reasons why centralized authentication is commonly used and considered advantageous, including increased security, simplified management, improved user experience and lower costs.

While there are some drawbacks to centralized authentication, such as the increased risk of a single point of failure and increased complexity in managing the central authority, the benefits often outweigh the risks.

Centralized authentication and rate-limiting at the edge of the service mesh.

The steps involved in the centralized authentication process are the following:

  1. Any incoming request to the Kubernetes cluster first lands at the Istio ingress gateway.
  2. The request containing the identity token is proxied to a central authentication gRPC service with the help of Envoy's external authorization filter.
  3. The central authentication service queries Redis with the identity token and metadata associated with the request.
  4. Redis responds with the identity associated with the token and the current rate-limit count based on the request metadata.
  5. The central authentication service responds to Istio with one of the following (see the sketch after this list):
     • An authenticated response, with the user context attached to the request in the form of request headers
     • An unauthenticated response
     • A rate-limit-exceeded response
  6. An authenticated request containing the user context is then forwarded to the upstream service.
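
To illustrate step 5, here is a minimal Go sketch of the Check method that Envoy's external authorization filter invokes on a central authentication service; the Redis lookup is stubbed out, and the x-auth-identity header, port and status mapping are illustrative assumptions rather than Contentstack's actual contract.

```go
package main

import (
	"context"
	"log"
	"net"

	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	authv3 "github.com/envoyproxy/go-control-plane/envoy/service/auth/v3"
	typev3 "github.com/envoyproxy/go-control-plane/envoy/type/v3"
	rpcstatus "google.golang.org/genproto/googleapis/rpc/status"
	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
)

type authServer struct {
	authv3.UnimplementedAuthorizationServer
}

// lookupIdentityAndRate stands in for the Redis query of steps 3 and 4: resolve
// the token to an identity and check the current rate-limit count.
func lookupIdentityAndRate(ctx context.Context, token string) (identity string, withinLimit bool, err error) {
	// e.g. a pipelined GET on the token key plus an INCR on the rate-limit key
	return "user-123", true, nil
}

func deny(grpcCode codes.Code, httpCode typev3.StatusCode) *authv3.CheckResponse {
	return &authv3.CheckResponse{
		Status: &rpcstatus.Status{Code: int32(grpcCode)},
		HttpResponse: &authv3.CheckResponse_DeniedResponse{
			DeniedResponse: &authv3.DeniedHttpResponse{Status: &typev3.HttpStatus{Code: httpCode}},
		},
	}
}

// Check is called by Envoy for every request entering the mesh.
func (s *authServer) Check(ctx context.Context, req *authv3.CheckRequest) (*authv3.CheckResponse, error) {
	headers := req.GetAttributes().GetRequest().GetHttp().GetHeaders()
	identity, withinLimit, err := lookupIdentityAndRate(ctx, headers["authorization"])
	if err != nil {
		return deny(codes.Unauthenticated, typev3.StatusCode_Unauthorized), nil // unauthenticated response
	}
	if !withinLimit {
		return deny(codes.ResourceExhausted, typev3.StatusCode_TooManyRequests), nil // rate limit exceeded
	}
	// Authenticated: attach the user context as request headers for upstream services.
	return &authv3.CheckResponse{
		Status: &rpcstatus.Status{Code: int32(codes.OK)},
		HttpResponse: &authv3.CheckResponse_OkResponse{
			OkResponse: &authv3.OkHttpResponse{
				Headers: []*corev3.HeaderValueOption{
					{Header: &corev3.HeaderValue{Key: "x-auth-identity", Value: identity}},
				},
			},
		},
	}, nil
}

func main() {
	lis, err := net.Listen("tcp", ":9191")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	authv3.RegisterAuthorizationServer(srv, &authServer{})
	log.Fatal(srv.Serve(lis))
}
```

In an Istio mesh, a service like this is typically wired in through the mesh config's external authorization extension provider (or an EnvoyFilter), so every request entering the cluster passes through Check before reaching any upstream workload.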

Advantages over the monolithic architecture:

  • Easier to onboard newer microservices to the central authentication service by using label-based istio-injection.
  • All requests are authenticated and rate-limited at the edge of the service mesh, ensuring that each request entering the cluster is always rate-limited and authenticated.
  • The request forwarded to the upstream microservice has user identity context attached to it in the request headers, which can be further used for applying authorization rules.
  • Centralizing authentication eliminates the problem of upstream microservices each performing their own mutations on the identity derived from the token.

Authorization

Centralized authorization

We first tried a model where, along with authentication and rate limiting, authorization was also made a responsibility of the central authentication and rate-limiting service. The service would first identify the incoming request's identity from the token and apply rate limiting based on the request metadata. Once the user identity was known, authorization rules could be applied to it, thereby performing all of auth at the edge of the service mesh.

Problems with this model are the following:

  • This model could only perform basic authorization at the edge based on the request metadata provided, such as validating organizations, stacks, etc. However, it could not perform fine-grained authorization, such as finding out which content types the logged-in user had access to.
  • For RBAC, each domain has its own roles and permissions; performing authorization for such requests requires knowledge of the upstream domain and leads to domain-specific logic creeping into the centrally managed, domain-agnostic platform.
  • As newer domain microservices are added, this again leads to a lack of visibility into the authorization rules applied to a resource.

Distributed authorization with central authorization service

We then tried a model that distributed authorization to the upstream microservices, with each upstream microservice calling a central authorization service. The authorization service had access to all the roles and permissions of the different domains and could return appropriate authorization results. Authorization could now be performed from the upstream service's business module by making a network request over Kubernetes cluster networking, avoiding a call over the internet.

Problems with this model are the following:

  • The central authorization service becomes a single point of failure.
  • Any change in the API contract defined by the central authorization service requires all the upstream services to abide by it and makes shipping these changes independently a complex task.
  • Performing authorization adds a network hop, thereby increasing the latency.

Distributed authorization with the sidecar pattern

Learning from the previously discussed disadvantages, we wanted to build a model that kept authorization distributed, delivered low latency and made shipping authorization logic an independent activity.

Architecture

The architecture involves the following components:

  • Auth sidecar
  • Central policy service
  • Auth SDK

Architecture for authorizing an authenticated request with the sidecar pattern.

Auth sidecar

The auth sidecar is a gRPC service that gets injected along with the microservice’s application container in the same Kubernetes pod. Let’s understand how this architecture helped us tackle the previously mentioned problems.

Single point of failure: The auth sidecar runs alongside the application container in the same pod, so any failure is limited to that pod. Restarting the pod gives us a fresh set of application and auth sidecar containers.

Independent delivery: Since the auth sidecar service container is shipped along with the application container, the application service can decide which version of the sidecar image to use, thereby making the delivery of newer versions of the authorization sidecar independent.

Low latency: There is no network hop involved in making a gRPC call to the auth sidecar running in the same pod. This helps the application to get the authorization result with very low latency (in a few milliseconds).

Updating authorization logic: The auth sidecar periodically downloads fresh policy bundles; whenever the policy bundle coming from the central policy service changes, the auth sidecar updates its local policy cache with the new bundle. This way, updating authorization logic does not require a fresh deployment or restart of the application container.

Components involved in auth sidecar

Responsibilities of the components involved in the authorization sidecar.

Aggregator: The responsibility of the aggregator is to fetch authorization-related data for the current identity based on the metadata provided by the application service in the gRPC call. It then aggregates it to be evaluated against the authorization policy.

OPA Engine: We use OPA (Open Policy Agent) to periodically download fresh policies and evaluate the policy path mentioned in the gRPC call against the aggregated data.
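
Roughly, this can be pictured with OPA's Go SDK, which handles both the periodic bundle download and the evaluation of a policy path against an input document; the service URL, bundle name, policy path and input fields below are illustrative placeholders, not Contentstack's actual configuration.

```go
package main

import (
	"bytes"
	"context"
	"fmt"

	"github.com/open-policy-agent/opa/sdk"
)

func main() {
	ctx := context.Background()

	// Illustrative OPA config: poll the central policy service for fresh
	// policy bundles every 30-60 seconds and cache them locally.
	config := []byte(`{
		"services": {
			"central-policy-service": {
				"url": "http://central-policy-service.auth.svc.cluster.local:8181"
			}
		},
		"bundles": {
			"authz": {
				"service": "central-policy-service",
				"resource": "bundles/authz.tar.gz",
				"polling": {"min_delay_seconds": 30, "max_delay_seconds": 60}
			}
		}
	}`)

	opa, err := sdk.New(ctx, sdk.Options{ID: "auth-sidecar", Config: bytes.NewReader(config)})
	if err != nil {
		panic(err)
	}
	defer opa.Stop(ctx)

	// Evaluate the policy path named in the gRPC call against the data the
	// aggregator collected for the current identity.
	result, err := opa.Decision(ctx, sdk.DecisionOptions{
		Path: "contentstack/entries/allow", // policy path requested by the application
		Input: map[string]interface{}{ // aggregated identity and resource data
			"user":         "user-123",
			"roles":        []string{"content_manager"},
			"content_type": "blog_post",
			"action":       "update",
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", result.Result)
}
```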

Central policy service

The central policy service is a repository of policy bundles (*.rego files) that are independently managed by the domain microservices. The maintainers of the domain microservices create these policies for the various resources that need authorization. Since these policies contain only rules, they greatly increase the visibility of the authorization rules being applied to a particular resource.

Auth SDK

The auth-sdk is an internal library that we developed to help developers of upstream microservices easily communicate with the different auth components. It can do the following (sketched below):

  • Extract the user identity and other useful information attached to the request headers by the central authentication service
  • Discover the various auth components and streamline communication with them
  • Expose different helper methods to perform any auth-related activity on behalf of the application service
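
The auth-sdk itself is internal, but a minimal Go sketch of the kind of helpers it exposes might look like the following; the header names, sidecar address and function signatures are hypothetical and only illustrate the pattern.

```go
package authsdk

import (
	"context"
	"net/http"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// Identity is the user context attached to each request by the central
// authentication service; these header names are hypothetical.
type Identity struct {
	UserID    string
	TokenType string
}

// IdentityFromRequest extracts the user identity injected at the edge.
func IdentityFromRequest(r *http.Request) Identity {
	return Identity{
		UserID:    r.Header.Get("x-auth-identity"),
		TokenType: r.Header.Get("x-auth-token-type"),
	}
}

// Client discovers and talks to the auth components on behalf of the service.
type Client struct {
	sidecar *grpc.ClientConn
}

// New dials the auth sidecar on localhost (same pod, so no network hop).
func New() (*Client, error) {
	conn, err := grpc.NewClient("localhost:9292",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return nil, err
	}
	return &Client{sidecar: conn}, nil
}

// Authorize asks the sidecar to evaluate a policy path for the given identity
// and resource metadata; the real call would go through a generated gRPC stub,
// e.g. sidecarpb.NewAuthorizerClient(c.sidecar).Authorize(ctx, ...).
func (c *Client) Authorize(ctx context.Context, id Identity, policyPath string, resource map[string]string) (bool, error) {
	// Stubbed for illustration: the sidecar would return the OPA decision here.
	return id.UserID != "", nil
}
```

Because the sidecar is dialed on localhost within the same pod, the Authorize call avoids the extra network hop that made the earlier central authorization service model slower.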

Redesigned (new) architecture

Tracing the request lifecycle in our redesigned auth architecture.

Conclusion

Microservices-based architectures can help address the challenges of a monolithic architecture by separating user authentication and authorization into individual services, which can be developed, deployed and maintained independently. This approach can provide greater flexibility, scalability and security for user authentication and authorization.

However, it's important to note that transitioning to a microservices-based architecture can also come with some challenges, such as increased complexity and a need for more advanced DevOps practices. Proper planning, implementation and ongoing maintenance are crucial to ensuring a successful transition.
