
4 ways your teams can benefit from a composable DXP


Whether you’re a company leader, a developer or a creative director, chances are you understand the importance of having good content on your website and the other communication channels your organization leverages. If you’re like most mid-sized to large companies, you have a complex mix of content that’s used for diverse purposes: marketing and promotions, internal communications and investor relations, delivering personalized customer experiences, engaging potential customers and more.

Traditionally, delivering relevant omnichannel content has been a disjointed, time-consuming process that is difficult to manage, slow and inefficient. Compounding these issues is the frustration of developers, who are leaned on to edit code whenever any little thing needs to change, and of marketers, who can’t get updates made fast enough.

Fortunately, there’s now a much easier, more streamlined way to manage and publish content: digital experience platforms (DXPs) built on composable architecture and headless content management systems (CMSes). An increasing number of organizations are transitioning to this type of system for benefits including agility, speed and scalability. Last year, Gartner predicted that more than half of mainstream organizations would invest in composable applications by 2023.

Before delving into the benefits of composable, let’s first take a look at what a DXP built on a composable architecture actually is.

What is composable architecture?

Composable architecture is a way of separating a website’s front end (the presentation layer users see) from its back-end code (the systems that manage content and data), making development faster and easier. Because the two are decoupled, the front end and back end can be developed independently of each other, making deployments simpler and more efficient.

A composable architecture typically has a headless CMS at its core. This type of CMS provides an application programming interface or API that the front-end code can call to fetch data from the back end.
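
To make this concrete, here is a minimal sketch (in Go) of a rendering layer fetching a single entry from a headless CMS delivery API. The endpoint path, response shape and bearer-token header are assumptions for illustration only; the actual URL structure and authentication scheme depend on the CMS you choose.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Entry models a simplified content entry returned by a hypothetical
// headless CMS delivery API. Real APIs will have richer schemas.
type Entry struct {
	Title string `json:"title"`
	Body  string `json:"body"`
}

// fetchEntry calls a hypothetical REST-style delivery endpoint; consult
// your CMS's API docs for the real URL structure and auth headers.
func fetchEntry(baseURL, contentType, entryID, deliveryToken string) (*Entry, error) {
	url := fmt.Sprintf("%s/content_types/%s/entries/%s", baseURL, contentType, entryID)

	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+deliveryToken)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("delivery API returned %s", resp.Status)
	}

	var entry Entry
	if err := json.NewDecoder(resp.Body).Decode(&entry); err != nil {
		return nil, err
	}
	return &entry, nil
}

func main() {
	entry, err := fetchEntry("https://cms.example.com/v1", "blog_post", "hero-article", "example-token")
	if err != nil {
		fmt.Println("fetch failed:", err)
		return
	}
	fmt.Println(entry.Title)
}
```

Because the back end only exposes content through this API, the same call can feed a website, a mobile app or any other channel without touching the back-end code.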

What kind of tools or APIs are used in a composable DXP?

In addition to the headless CMS, which serves as the central hub of the composable DXP, this type of platform will include a wide variety of microservice-based APIs based on what your organization needs. The beauty is that you can pick and choose the best option in each of the areas below, among others, without being locked into a specific vendor:

  • E-commerce
  • Asset management
  • Customer management
  • Omnichannel management
  • Marketing automation and analytics
  • Content workflows
  • Customer engagement
  • AI tools

In a nutshell, composability means you have the freedom and flexibility to create a unique DXP that’s tailored specifically to your organization’s needs by choosing the right microservices. You might think of these microservices as an arsenal of tools that can help you elevate your organization above the competition.

If the idea of switching from a traditional, monolithic platform to a composable DXP seems daunting at first, keep in mind that the transition doesn’t have to happen all at once. Instead, it can take place one piece (or API) at a time as you add different products and services to the headless CMS. Composability enables this kind of targeted transition because each component or API works independently of every other component. As you might imagine, this has many advantages. One of the biggest is that a failure in one component doesn’t bring down the whole system.

A composable DXP provides many significant benefits for your organization’s executive, creative and technology teams. Here are four key features of composable DXP and how each team benefits.

Very little to no coding needed

With a composable DXP, most changes don’t require the technical knowledge of a developer. Here’s how this benefits teams at every level of your organization.

Executive teams

When marketing and technology teams can focus on what they do best, there should be less friction between the two. This reduces frustration levels and makes for happier employees, helping you retain your best workers.

Creative teams 

Composability will empower marketing teams to create, change and publish content without needing any technical expertise. Content is easy to access in one central location. Marketing teams will no longer have to create tickets and wait for developers to get around to their requests. Instead, they’ll create campaigns and push a variety of content types to multiple platforms and channels with greater speed and efficiency.

Technology teams

The time developers typically spend making everyday fixes and working with code to launch new campaigns will be freed up so that they can focus more time on creating user-friendly digital experiences for customers.

Scalability

Do you plan on adding e-commerce down the road? Want to add a mobile channel? Want your website to have chat functionality? It’s very easy to add new apps and services to your websites and other channels with a composable DXP.

Executive teams

The business can more easily expand its product and service offerings without having to worry about downtime for websites and other channels. You can focus on growing the business with confidence that your content management system has the agility to keep up. 

Creative teams

As new marketing automation and analytics tools become available, it will be simple to add them to your API mix.

Technology teams

It will be easier for IT to scale apps because services can be deployed independently. Tech can focus on one type of digital service, while others continue to work as normal. There’s no need for rushed overnight deployments or site downtime to release new functionality.

Speed

Composability improves speed in several different ways, including speed of publishing content, speed of implementing campaigns and speed of reaching business goals.

Executive teams

Business goals can be fulfilled faster, whether you aspire to expand into a new region or roll out new products and services. What better way to stay a step ahead of the competition?

Creative teams

Marketing leaders will be empowered to launch campaigns and publish content much faster. Again, there’s no waiting on IT to make changes. They can also push content to multiple sites without having to recreate it from scratch. Composability makes it easier to create a content block for one site and then quickly push that content to other sites and channels.

Technology teams

Slow implementations become a thing of the past, as IT teams focus their efforts on targeted API functionality, rather than being bogged down with tickets for minor edits and updates.

Improved customer experiences

When relying on a composable DXP, delivering content that’s personalized and relevant becomes the status quo instead of the exception, boosting customer satisfaction. 

Executive teams

The business can expect to reap the rewards of improved customer experiences. A recent Forrester Total Economic Impact (TEI) study demonstrates an ROI of 295% with a composable architecture.

Creative teams

Marketers will no longer be hindered by the rigidity of a monolithic CMS. Instead, they will have unlimited access to all the tools they need for success with the freedom to expand their toolkit any time they choose.

Technology teams 

With less time spent on repetitive requests, the IT staff can put its expertise to work in key areas which will have the biggest impact on customer satisfaction.

FAQs

As a recap and to answer additional questions you may have, here are a few frequently asked questions about composable DXPs.

Am I tied to one vendor that determines what solutions I can use?

No, a composable DXP gives you the freedom to choose the best solutions, regardless of vendor.

How do I know all the components that I want in my composable DXP will work together?

Composable providers understand how important it is for their solutions to integrate with other APIs, and they have worked to address this. They make their solutions easy to plug in through software development kits (SDKs) or one-click connections, so multiple APIs can work together seamlessly.

What if I want to keep tools on my current websites that are working?

With a composable DXP, an organization can choose the best options and even keep using some of the existing solutions that are already working. You are no longer locked into using just the services and apps that your vendor or platform supports.

What is the first step in transitioning to a composable DXP?

Begin by thinking about the apps and services you would want to have in your DXP if the options were limitless and then write them down. Be sure to get input from executive, creative and IT teams before searching for products and scheduling demos.

Learn more

Learn more about composable DXPs in our guide, “What is a DXP? Understanding digital experience platforms.”

Schedule a free demo to see how Contentstack’s composable digital experience platform can benefit executive, creative and technology teams at your organization.

You may find interesting

Learn how to drive business forward and build better customer experiences.

3 ways tech and business teams can help each other through a transformation

Not all great tech transformations are brought to the table by a developer, engineer, CTO or other technical person. We see great projects kick off because someone in marketing, sales or customer success raises the flag for change. Sometimes business people can see opportunities that aren’t as plain to the tech teams.

That’s what happened when Booking.com decided to transition off its old systems to a headless CMS. Juliette Olah, senior manager of Editorial, realized that her teams had produced thousands of pieces of content over the years, but the capabilities of their current technology significantly limited the value that content produced in their local markets and the possibilities for the future.

Listening to her “People Changing Enterprises” episodes, I admired the way Juliette united Booking.com’s product and editorial teams from the beginning to pull off their transformation. Business and tech can either be each other’s biggest advocates or frustrating roadblocks. To avoid the latter, here are three examples of how tech and business teams can support each other throughout a transformation.

Thoroughly debrief at the onset

At the beginning of every project, we encourage organizations to sit down with their cross-functional teams and level set. Business and tech have their own KPIs and goals to achieve. In a project that bridges the two, there should be a frank discussion, ending in clear, written requirements of process and goals for both sides.

Once Juliette realized a tech transformation was the answer to the editorial team’s needs, she became the living bridge between the editorial and product teams. Sitting down with tech stakeholders, they talked through what Juliette called a “comprehensive 360 view of the benefits to the technical side of the platform”:

  • The editorial team’s strategy and the justification behind the new technology
  • Real-life examples of what execution would look like
  • The business value a central headless CMS would bring to each local market
  • Opportunities they were currently missing out on because of their current tool

Because she had clearly done her homework and demonstrated the need on both sides, the product team was eager to get started.

Partner up to find and test new tools

Finding and testing new tools is an easy way for business and tech teams to partner effectively. When the new CMS was in place, the teams at Booking.com partnered to try out the new tool to make sure it worked for both sides.

At Contentstack, once a new solution is initially developed, we pull in our business partners for User Acceptance Testing. They can test, catch bugs or point out which workflows are trickier than anticipated, rather than the tech teams doing it all themselves.

Additionally, when you’re on the hunt for new tools, tag a business partner in for their opinion. Coming from a different mindset, they might be able to raise questions or point out benefits you didn’t consider.

Work together to phase out what isn’t needed

A transition to composable is the perfect time to evaluate which tools you’re bringing into the new environment and which ones you should retire. This is another area where tech and business teams should work together.

A few years ago, we bought an analytics tool for the organization to use on their reports. It was low-cost and met some of our needs upfront, so we decided to take a chance on it. Six months down the road, we were spending a huge amount of time trying to force the tool to work.
When a business person came to me and admitted the tool wasn’t helping their team meet their objectives, we decided to look into something else. On a composable project, it’s not always clear on the back end if a tool is working for teams as they need. That’s where our business partners come in. It’s a partnership.

Babe Ruth once said, "The way a team plays as a whole determines its success. You may have the greatest bunch of individual stars in the world, but if they don’t play together, the club won’t be worth a dime."

It’s easy for business and tech teams to work in silos, but working together produces more value. Especially in a tech transformation, the two teams are different sides of one coin. Find ways to bridge the gap and you’ll see much more value in your resulting tech stack.

How switching to a composable DXP will affect security

The top priority for any business is protecting sensitive information from cyberattacks, and the effectiveness of your cybersecurity measures largely depends on your tech stack. There are a number of benefits of going composable, and a key one is that composable DXPs can offer better security than monolithic solutions. Read on to learn:

  • How going composable can improve your organization’s cybersecurity
  • What you need to know to make your composable tech stack as secure as possible

What is composable architecture?

Composable architecture breaks down the large and complex functions found in monolithic solutions into smaller, more manageable pieces. An API acts as the go-between for these smaller pieces, allowing them to communicate and transfer information more efficiently. In a composable CMS, the front-end and back-end layers are decoupled, so changes can be made to the front end independent of back-end functions.

There are a variety of benefits to moving to a composable DXP, including reduced IT costs, more streamlined processes and functions, easier updates and, when properly implemented, better security.

What are the biggest threats businesses face?

Cyberattacks have always posed a risk to businesses, but that threat has grown in the past decade. Businesses have beefed up their cybersecurity measures when it comes to some of the more common threats like phishing and malware; unfortunately, hackers have responded by developing more sophisticated cyberattacks that are harder to spot and more difficult to guard against.

Today, businesses face a slew of cybersecurity threats. Ransomware attacks hold entire networks hostage. Endpoint attacks are on the rise, thanks to the shift toward remote work and, in turn, the number of off-site Internet of Things (IoT) devices connected to business systems. Supply chain attacks exploit security weaknesses in third-party vendors or providers to gain access to their partners’ systems. And even though we are better trained to spot phishing attempts and avoid malware, these strategies still work often enough that hackers continue to use them.

Composable DXPs provide the flexibility to employ cutting-edge cybersecurity measures to protect against cyberattacks and data breaches.

The security benefits of going composable

A strong cybersecurity strategy is especially important with composable DXPs. As noted above, a composable approach breaks the large, single-suite functions of monolithic platforms into smaller components. This allows for more customization options, as organizations can pick and choose the specific programs and functions they need to deliver a top-tier digital experience. But each individual piece has its own security requirements and vulnerabilities, and your cybersecurity strategy needs to account for all these differences so there are no holes to exploit.

When moving to a composable DXP, a key first step is to define your security needs and identify the security tech stack that best meets those needs. This will serve as the foundation of your cybersecurity framework, and all the functionality that follows needs to fit within it. The benefit is that it becomes much easier to identify and isolate any vulnerabilities in your security. With monolithic systems, spotting security risks or finding the source of a breach means combing through the entire system. With a composable DXP, it’s much faster and easier to go through each individual function and make the necessary adjustments to secure your system.

How to properly secure your composable tech stack

Breaking out functions into individual components with a composable DXP solution creates more endpoints that can be vulnerable to cyberattacks. But even though there are more potential points of access, there are also more ways to secure your systems.

API management platforms make it easy to track API usage and integrate up-to-date security protocols like OAuth and OpenID. That allows you to control who can access and use critical applications and data stored in cloud services, and with authentication processes that verify user IDs, you can catch security threats before a breach occurs. To secure your composable DXP, these functions are essential:

  • End-to-end encryption
  • Access controls
  • Authentication: encryption keys, 2FA, securing IoT devices
  • Data protection
  • Detailed monitoring

Implementing these functions and tailoring them to the unique needs of your composable DXP helps ensure that the sensitive data in your platform is protected from cyberattacks.
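
As a rough illustration of the authentication piece described above, the Go sketch below shows a small HTTP middleware that checks a bearer token before a request reaches a protected component and passes the verified identity downstream. The token store, header names and route are placeholders invented for the example; in practice this job is usually handled by your API management platform or an OAuth/OpenID provider rather than hand-rolled code.

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// validTokens stands in for whatever your identity provider or API
// management platform uses to verify credentials (for example, OAuth
// token introspection). Hard-coded here purely for illustration.
var validTokens = map[string]string{
	"example-token-123": "editor@example.com",
}

// requireAuth wraps a handler and rejects requests that do not carry
// a recognized bearer token.
func requireAuth(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		token := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
		user, ok := validTokens[token]
		if !ok {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		// Pass the verified identity downstream so individual
		// components never have to re-verify the raw credential.
		r.Header.Set("X-User", user)
		next.ServeHTTP(w, r)
	})
}

func main() {
	protected := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "hello, %s\n", r.Header.Get("X-User"))
	})
	http.Handle("/reports", requireAuth(protected))
	http.ListenAndServe(":8080", nil)
}
```
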
Data security in your composable DXP

When it comes to brand interactions, today’s consumer expects a personalized experience, but in order to create a robust customer journey, you need to gather data about your customers. Consumers are willing to provide that data if it means a better digital experience, but they also expect that their sensitive information will be safe in your hands.

The financial cost of a data breach can be massive, but it’s nothing compared to the damage your organization’s reputation will suffer if your customer data is exposed due to a security breach. Fortunately, your composable DXP strategies can help provide better data security.

With a monolithic system, if your critical infrastructure is breached, all your customer data is exposed. A composable DXP allows you to create modular data pipelines that connect to each individual component and the relevant data, rather than a single large block that contains all your data, as is the case with legacy systems. With composable, you can scale up or down and implement or remove components based on your security needs. And if a data breach does occur in one component, the scope of the data exposure is usually limited.

Securely meeting consumer demands

The customer experience is delivered across different parts of your composable DXP, from your headless CMS to your marketing stack, and it all needs to be supported by a robust cybersecurity strategy that meets or exceeds industry standards. Cybersecurity threats come in all shapes and sizes, and cyberattacks can come from anywhere. To combat those threats and protect your system, your cybersecurity strategy needs to address all the potential risks. Your technology also needs to be flexible and adaptable in order to guard against new threats as they arise. Going composable allows you to build your tech stack to match your security strategy, and vice versa.

It’s important to remember that ensuring a safe and secure experience goes beyond adding security protocols to your tech stack. Rather, it’s about deploying the right technologies and data protection programs and practices for the unique needs of your organization.

Learn more

Learn more about composable architecture in our blog post, “Why composable architecture is the future of digital experience.”

Schedule a free demo to learn how Contentstack can help you create a secure composable DXP solution that best suits your organization’s needs.

From legacy systems to microservices: Transforming auth architecture

Contentstack receives billions of API requests daily, and every request must be validated as a valid Contentstack identity. It is a common industry practice to achieve this using some sort of “identity token” for every request. Imagine having multiple types of identity tokens, such as session tokens, OAuth tokens, delivery tokens, management tokens and so on. Securing billions of API requests daily can be challenging, so we decided to address it by spinning up a new team that handles the complex problems of user authentication and authorization in a token-agnostic platform.

Our transition journey

Contentstack started as an API-first headless CMS platform that allowed content managers to create and manage content while simultaneously and independently enabling developers to use Contentstack’s delivery API to pull that content and render it to create websites and applications. This means that Contentstack’s traffic increases proportionately to the traffic received by our customers’ websites and applications.

With increased traffic and usage, we catered to various new use cases by developing new features. These features were powered by a set of microservices, each catering to a particular feature domain and needing support for processing multiple identity tokens that had roles and permissions associated with them. The whole system had turned out to be quite complex, and performing auth had become a great challenge. This prompted us to redesign our auth architecture to address the issues of being a token-agnostic and low-latency platform.

Read on to learn more about this journey and how we have been able to:

  • Transition from a monolith to a low-latency, microservices-based auth (authentication plus authorization) and rate-limiting architecture.
  • Set up centralized authentication for multiple (any domain) microservices that are part of the same Kubernetes cluster.
  • Set up decentralized, self-serviced, policy-based authorization for internal services and teams.

Increasing feature sets increased domain microservices, which increased the complexity of performing auth.

Monolithic auth architecture

Monolithic architectures can be difficult to maintain, scale and deploy. In a monolithic architecture, user authentication and authorization are typically tightly coupled with the application code, making it difficult to implement and maintain robust security measures.

Monolithic architectures often rely on a single authentication and authorization mechanism for the entire application, which can limit the flexibility of the system to accommodate different types of users or access levels.

Performing auth in a typical monolithic architecture.

In monolithic architectures, the steps involved in auth are the following:

  • Users use their credentials at the client to generate a session token, or use an existing identity token to generate other identity tokens.
  • Users then use the generated identity token to perform a business operation by making a request to the application server.
  • Once a request is received at the application server, the authentication middleware authenticates the token and forwards the request to the business module.
  • The business module performs the business operation based on the authorization rules applied to the user identity.

Problems with monolithic auth architecture:

  • Authentication and authorization logic is mixed with the business logic.
  • Changing the way an identity performs an operation on a resource involves a change in the associated auth-related logic.
  • Each domain individually implements the authorization logic, causing differences in implementation.
  • Since authorization logic is deeply nested in business logic, we lack visibility into the authorization rules applied to a resource.
  • Shipping new authorization logic requires a fresh deployment of the application image.
  • New microservices require knowledge of the various identity tokens and resource authorization rules to be applied.

Microservices auth architecture

Microservices offer a more flexible, modular approach that allows for easier maintenance, scalability and deployment. With microservices, each service can be developed, deployed and scaled independently, allowing for faster time-to-market, improved fault tolerance and better alignment with modern development practices. Additionally, microservices offer more efficient use of resources and better support for diverse technology stacks.

Authentication

Why centralized authentication?

Centralized authentication is a security model in which a central authority, such as a server or service, manages authentication rather than it being distributed across multiple systems or applications. There are several reasons why centralized authentication is commonly used and considered advantageous, including increased security, simplified management, improved user experience and lower costs. While there are some drawbacks to centralized authentication, such as the increased risk of a single point of failure and increased complexity in managing the central authority, the benefits often outweigh the risks.

Centralized authentication and rate-limiting at the edge of the service mesh.

The steps involved in the centralized authentication process are the following:

  • Any incoming request to the Kubernetes cluster first lands at the Istio ingress gateway.
  • The request containing the identity token is proxied to a central authentication gRPC service with the help of envoyproxy’s external authorization filter.
  • The central authentication service queries Redis with the identity token and the metadata associated with the request.
  • Redis responds with the identity associated with the token and the current rate-limit count based on the request metadata.
  • The central authentication service responds to Istio with one of the following: an authenticated response with user context attached to the request in the form of request headers, an unauthenticated response, or a rate-limit-exceeded response.
  • An authenticated request containing the user context is then forwarded to the upstream service.

Advantages over the monolithic architecture:

  • It is easier to onboard newer microservices to the central authentication service by using label-based istio-injection.
  • All requests are authenticated and rate-limited at the edge of the service mesh, ensuring that each request entering the cluster is always rate-limited and authenticated.
  • The request forwarded to the upstream microservice has user identity context attached to it in the request headers, which can be further used for applying authorization rules.
  • Keeping authentication centralized eliminates the problem of multiple mutations performed by the upstream microservices on the identity of the token.
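
A highly simplified sketch of that edge check, written in Go, might look like the following. It is a stand-in for illustration only, not Contentstack’s implementation: the real service is a gRPC endpoint wired into Envoy’s external authorization filter and backed by Redis, whereas this sketch uses in-memory maps and invented names.

```go
package main

import (
	"errors"
	"fmt"
)

// Decision is what the central authentication service hands back to the
// ingress layer: either a rejection or a set of identity headers to
// attach to the request before it reaches the upstream microservice.
type Decision struct {
	Allowed bool
	Headers map[string]string
	Reason  string
}

// tokenStore and rateCounts stand in for Redis lookups in this sketch.
var tokenStore = map[string]string{"tok-abc": "user-42"}
var rateCounts = map[string]int{}

const rateLimitPerWindow = 1000

var errUnknownToken = errors.New("unknown identity token")

// check authenticates the identity token and applies a per-identity
// rate limit, mirroring the three possible responses described above:
// unauthenticated, rate limit exceeded, or authenticated with context.
func check(identityToken, requestPath string) (Decision, error) {
	userID, ok := tokenStore[identityToken]
	if !ok {
		return Decision{Allowed: false, Reason: "unauthenticated"}, errUnknownToken
	}

	rateCounts[userID]++
	if rateCounts[userID] > rateLimitPerWindow {
		return Decision{Allowed: false, Reason: "rate limit exceeded"}, nil
	}

	// Attach user context as headers; upstream services use these for
	// authorization instead of re-parsing the raw token.
	return Decision{
		Allowed: true,
		Headers: map[string]string{
			"x-user-id":      userID,
			"x-request-path": requestPath,
		},
	}, nil
}

func main() {
	d, err := check("tok-abc", "/v3/content_types")
	fmt.Println(d.Allowed, d.Headers, err)
}
```
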
Authorization

Centralized authorization

We first tried a model where, along with authentication and rate limiting, we also made authorization a responsibility of the central authentication and rate-limiting service. The service would first identify the incoming request’s identity from the token and apply rate limiting based on the request metadata. Once the user identity was known, authorization rules could be applied to it, thereby performing the entire auth at the edge of the service mesh. Problems with this model are the following:

  • This model could only perform basic authorization at the edge based on the request metadata provided, such as validating organizations, stacks, etc. It could not perform fine-grained authorization, such as finding out which content types the logged-in user had access to.
  • For RBAC, each domain has its own roles and permissions; performing authorization for such requests requires knowledge of the upstream domain and leads to the addition of domain-specific logic in the centrally managed, domain-agnostic platform.
  • With newer domain microservice additions, this again would lead to the problem of lacking visibility into the authorization rules applied to a resource.

Distributed authorization with a central authorization service

We then tried a model where we distributed authorization to the upstream microservices, with each upstream microservice making a call to a central authorization service. The authorization service had access to all the roles and permissions of the different domains and was able to give appropriate authorization results. Authorization could now be performed from the upstream service’s business module by making a network request over Kubernetes cluster networking, avoiding a call over the internet. Problems with this model are the following:

  • The central authorization service becomes a single point of failure.
  • Any change in the API contract defined by the central authorization service requires all the upstream services to abide by it, making it complex to ship these changes independently.
  • Performing authorization adds a network hop, thereby increasing latency.

Distributed authorization with the sidecar pattern

Learning from the previously discussed disadvantages, we wanted to build a model that kept authorization distributed, had low latency and made shipping authorization logic an independent activity.

Architecture

The architecture involves the following components:

  • Auth sidecar
  • Central policy service
  • Auth SDK

Architecture for authorizing an authenticated request with the sidecar pattern.

Auth sidecar

The auth sidecar is a gRPC service that gets injected alongside the microservice’s application container in the same Kubernetes pod. Let’s look at how this architecture helped us tackle the previously mentioned problems.

  • Single point of failure: The auth sidecar runs with the application container in the same pod, and any failure is limited to the current pod. Restarting the pod gives us a fresh set of application and auth sidecar containers.
  • Independent delivery: Since the auth sidecar container is shipped along with the application container, the application service can decide which version of the sidecar image to use, making delivery of newer versions of the authorization sidecar independent.
  • Low latency: There is no network hop involved in making a gRPC call to the auth sidecar running in the same pod. This lets the application get the authorization result with very low latency (a few milliseconds).
  • Updating authorization logic: The auth sidecar periodically downloads fresh policy bundles; any time the policy bundle coming from the central policy service changes, the auth sidecar updates its local policy cache with the new bundle. This way, updating authorization logic does not require a fresh deployment or restart of the application container.

Components involved in the auth sidecar

Responsibilities of the components involved in the authorization sidecar.

  • Aggregator: The aggregator fetches authorization-related data for the current identity based on the metadata provided by the application service in the gRPC call. It then aggregates that data to be evaluated against the authorization policy.
  • OPA engine: We use OPA (Open Policy Agent) to periodically download fresh policies and evaluate the policy path mentioned in the gRPC call against the aggregated data.

Central policy service

The central policy service is a repository of policy bundles (*.rego files) that are independently managed by the domain microservices. The maintainers of the domain microservices create these policies for the various resources that need authorization. Since these policies only involve rules, they greatly increase the visibility of the authorization rules being applied to a particular resource.
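
As a toy illustration of the division of labor inside the sidecar, the Go sketch below shows an aggregator gathering permissions for an identity and a policy check evaluating them. The identities, permission strings and function names are invented for the example; in the real architecture the policy lives in a Rego bundle evaluated by the OPA engine rather than in hard-coded Go.

```go
package main

import "fmt"

// Input mirrors what the application container sends to the auth
// sidecar over gRPC: who is acting, on what resource, doing what.
type Input struct {
	UserID   string
	Action   string
	Resource string
}

// aggregate plays the aggregator's role: fetch the authorization data
// (roles and permissions) associated with the identity. Hard-coded
// here; the real aggregator pulls this from domain services or caches.
func aggregate(userID string) []string {
	roles := map[string][]string{
		"user-42": {"content_type:read", "entry:read", "entry:write"},
	}
	return roles[userID]
}

// evaluate stands in for the OPA engine checking the request against
// the policy bundle published by the owning domain team.
func evaluate(in Input, permissions []string) bool {
	needed := in.Resource + ":" + in.Action
	for _, p := range permissions {
		if p == needed {
			return true
		}
	}
	return false
}

func main() {
	in := Input{UserID: "user-42", Action: "write", Resource: "entry"}
	allowed := evaluate(in, aggregate(in.UserID))
	fmt.Println("allowed:", allowed) // allowed: true
}
```
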
Auth SDK

The auth-sdk is an internal library we developed that helps the developers of upstream microservices easily communicate with the different auth components. It can do the following:

  • Extract the user identity and other useful information attached to the request headers by the central authentication service
  • Discover the various auth components and streamline communication with them
  • Expose different helper methods to perform any auth-related activity on behalf of the application service

Redesigned (new) architecture:

Tracing the request lifecycle in our redesigned auth architecture.

Conclusion

Microservices-based architectures can help address some of the challenges of monolithic architecture by separating user authentication and authorization into individual services, which can be developed, deployed and maintained independently. This approach can provide greater flexibility, scalability and security for user authentication and authorization. However, it’s important to note that transitioning to a microservices-based architecture can also come with challenges, such as increased complexity and a need for more advanced DevOps practices. Proper planning, implementation and ongoing maintenance are crucial to ensuring a successful transition.
