Varia Makagonova

Varia is Director of Marketing at Contentstack.

Posts by Varia Makagonova

Sep 24, 2021

Project CUE: Omnichannel Personalization in 12 Weeks

Personalization is tough. So tough, in fact, that Gartner predicts 80% of marketers will abandon their personalization efforts by 2025. That’s why we set out to prove there’s now a practical path to omnichannel personalization that’s powered by best-in-class enterprise tools. To demonstrate what’s possible with this modern approach, Contentstack, Uniform, and EPAM teamed up to launch Project CUE, an agile personalization showcase. In just 12 weeks, the Project CUE team created a personalized booking experience for guests of a fictional Las Vegas resort that offers entertainment and dining packages alongside a visitor’s stay. As part of the personalized experience for the “Balbianello” resort, web and mobile content adapts to real-time visitor behavior, such as browsing events, clicking on marketing emails, or taking a quiz. When booking tickets or a table, a guest list can be added to recommend additional events that match party size, schedule, and available guest preferences, and to ensure everyone has access to a personal itinerary of bookings and travel information. With a modern approach to personalization, a lot can happen in three months.

How We Did It: Flexible Tools for Efficient Teams

Using Contentstack’s headless content management, Uniform’s personalization engine, and EPAM’s design and development expertise, the Project CUE team showed that empowering developers and business teams with easy-to-use tools offers a fast track to personalization.

MACH Architecture

A MACH approach to digital architecture (microservice-based, API-first, cloud-native, headless) is key to modern personalization. MACH tools are designed to support a composable enterprise where every component is pluggable, scalable, and replaceable, and can be continuously improved to meet evolving business needs.
The composability of Contentstack and Uniform allowed the team to leverage quick starts for both solutions and get a functioning website up and running in the first week of development sprints. This ease of integration, made possible by MACH architecture, lets companies create proof-of-concept projects with minimal resources that can be quickly iterated on and scaled up as value is proven.

Parallel Work

Project CUE relied on a team that spanned three companies and multiple time zones, so tools needed to be highly collaborative and ensure global team members could move at full speed. The modularity of MACH technologies allows teams to build and manage different experience components in parallel, meaning projects are no longer held up by a waterfall of dependencies. For Project CUE, this meant that content creators could map content personalization, designers could create wireframes, and developers could work on the frontend simultaneously, checking in once a week to update everyone on progress. For a more in-depth look at the collaboration involved, check out the development and architecture overview or watch the full Project CUE session presented at DX Summit.

Intuitive Personalization

For personalization to be practical, it shouldn’t require a PhD in rules to work. Uniform and Contentstack integrate to let marketers create content and intuitively manage personalization from the same place. Simple metadata tags can be added to each piece of content to map it to visitor interests, with the ability to add and remove tags easily as the strategy evolves. Instead of locking visitors into a single persona, Uniform’s personalization engine uses real-time behavior to identify a visitor’s current intent, such as “booking tickets to a show” or “finding gourmet dining,” and uses the metadata tags to surface the right pieces of content in the right context.
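In practice, this tagging model can be sketched in a few lines. The shapes, field names, and intent tags below are illustrative assumptions for the sake of the example, not the actual Contentstack or Uniform APIs:

```typescript
// Hypothetical content entries tagged with visitor interests. The shapes and
// tag names are invented for illustration, not real Contentstack/Uniform types.
interface ContentEntry {
  title: string;
  intents: string[]; // metadata tags mapping the content to visitor interests
}

// Surface the entries tagged with the visitor's current intent, falling back
// to untagged (default) content when nothing matches.
function surfaceContent(entries: ContentEntry[], currentIntent: string): ContentEntry[] {
  const matches = entries.filter(e => e.intents.includes(currentIntent));
  return matches.length > 0 ? matches : entries.filter(e => e.intents.length === 0);
}

const entries: ContentEntry[] = [
  { title: "Headliner Show Tickets", intents: ["booking-tickets-to-a-show"] },
  { title: "Chef's Tasting Menu", intents: ["finding-gourmet-dining"] },
  { title: "Welcome to Balbianello", intents: [] }, // default hero content
];

console.log(surfaceContent(entries, "finding-gourmet-dining").map(e => e.title).join(", "));
// prints "Chef's Tasting Menu"
```

Because the tags live on the content itself, adding a new intent means tagging a new entry rather than rewriting page-level rules.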
What It Proves: High-performing Experiences on Any Channel

For personalization to drive results, speed is critical not only for time to market but also for content delivery. Both Contentstack and Uniform are built for enterprise scale and demands, ensuring a high-performing experience across every touchpoint.

Headless Content

Contentstack decouples content from its presentation (i.e., its “head”) and stores content as modular blocks that can be created once and delivered to any channel via APIs. This API-first approach delivers content quickly, since it’s not weighed down by frontend code, and the ability to update individual blocks of content without reloading the entire page enables highly efficient personalization.

Edge-based Personalization

Uniform uses a globally distributed content delivery network (CDN) to move personalization as close to the user as possible, rather than continuously calling back to a central server. Personalization happens client-side, using data such as device characteristics, location, and visitor behavior to identify a visitor’s intent and adapt content in real time. Unlike third-party personalization tools that rely on render-blocking scripts, Uniform runs with minimal JavaScript to cut content delivery time well below industry benchmarks such as Google’s Core Web Vitals.

Progressive Web App

One way that Project CUE leverages the joint power of Contentstack and Uniform is with a PWA for the mobile experience. Visitors get the look and feel of an app from the convenience of a mobile browser, with content and personalization data seamlessly shared between web and mobile.

Where to Start: A Practical Approach to Personalization

The flexibility of MACH tools means that companies can start implementing personalization without needing to rip and replace their current technology landscape.
Teams can start experimenting with personalization in one part of the experience, such as a microsite, a specific brand, or a certain stage of the customer journey, and use those successes to gain momentum and scale personalization across the business. To learn more about jumpstarting agile personalization in your organization, check out our recent solution brief on the three new rules of personalization.

Aug 05, 2021

Content + Commerce: Meet Modern Consumer Demands With AI Search and Omnichannel CMS

In our post-pandemic world, consumer expectations for ecommerce are shifting rapidly. Customers are demanding a high level of access and digitization, and the majority believe the experience a brand provides is just as important as the products it sells. What does this mean? It’s not enough for enterprise ecommerce organizations to pivot away from outdated digital experiences — they must also develop the omnichannel content management capabilities necessary to deliver accurate, personalized, and speedy interactions. Creating a modern commerce strategy means brands need technology that caters to shoppers at every stage of their journey, not just the shopping cart. As a recent report from Gartner Research, “Composable Commerce Must Be Adopted for the Future of Applications,” puts it: “A growing reality for digitally mature organizations is that digital commerce does not stand alone and should no longer be a monolithic silo of engagement. This goes beyond consistency across channels to mean a unified end-to-end customer journey, including engagement and post sales relationships and support. The simplicity of the ‘e-commerce’ go-to-market is being challenged via requirements that go beyond the traditional core buying journey.” If your brand is among the many that are eager to modernize the experience but hesitant about the time and cost of ecommerce transformation, starting with a headless content and search solution is a way to see immediate results without undertaking an exhausting replatforming project.

The Problem: Your Traditional Ecommerce Solution Is Keeping You from Moving with the Market

Your brand has likely already seen a shift in client expectations. But despite cranking out more content, adding more products, and maybe even reducing prices, you just can’t seem to gain the traction you need.
Giving your customers more to choose from is great, but with unlimited choice at their fingertips, what online shoppers really need from your brand is help discovering the products and content that are perfect for them. Modern consumers want an immersive, omnichannel ecommerce experience that delivers just-right content at just the right time, whether they’re starting a transaction via smart speaker or completing it via smartphone. The problem is, the monolithic ecommerce tools many brands work with simply cannot support these kinds of interactions. Traditional suites are inflexible and struggle to merge content marketing, merchandising, and product catalog information to deliver a modern shopping experience. Trying to force them to do so often requires an awfully expensive duct-tape project — one that is likely to bust your budget and risk downtime that could lose millions in sales. Ecommerce teams don’t have the luxury of pausing business to revamp their technology, and they need a solution that provides the freedom to transform the experience without a major replatform.

The Solution: Quickly Upgrade Your Shopping Experience with Headless Content and AI Search

A recent movement among technology leaders is helping brands ramp up engaging commerce experiences without having to replace their core transactional engine. That movement is MACH — a term that describes technology that is microservice-based, API-first, cloud-native, and headless. By layering a MACH-first headless content management system (CMS) and product discovery engine over your existing technology, you can quickly provide a modern, content-driven ecommerce experience that your customers will love. Contentstack and Constructor, both certified vendors in the MACH Alliance, are joining forces to showcase just how quickly brands can create value with a modern content and commerce experience.
The collaborative proof-of-concept project combines Contentstack’s agile, headless CMS with Constructor’s AI-driven ecommerce search to offer highly engaging, personalized product discovery. “When somebody searches for pants, your goal is not just to come back with a pair of pants. You want to show the pants that are most attractive to that particular person at that particular time,” says Eli Finkelshteyn, Founder and CEO of Constructor. “That’s more than basic relevance or keyword matching; it’s really understanding the user, the products, and the search context in a way that isn’t possible with the legacy platforms that were invented before AI really took off.” Together, Contentstack and Constructor create content and ecommerce workflows that empower cross-functional teams to launch cohesive, personalized, and omnichannel campaigns that convert leads into lifelong brand evangelists. And the best part about this pairing is how easy it is to integrate into your current ecommerce platform without breaking the bank or your developers’ backs.

How to Update Your Content Architecture to Create Modern Ecommerce Experiences

If you’re ready to inject more value into the shopping experience and start exploring new revenue channels today, this three-step approach can help you move off your outdated monolith and toward a flexible MACH experience:

1. Integrate an Agile Headless CMS

There’s no room for unplanned downtime in ecommerce, which is why Contentstack — with the highest enterprise customer satisfaction ratings in the CMS industry — makes it easy to adopt a headless CMS in a matter of weeks, not months. Integrating a headless CMS into your tech stack and migrating over your content marketing campaigns is the first step in decoupling functionality from the frontend, so you can immediately get started on new customer-facing experiences with no risk to critical backend systems.

2. Get Started with “Search and Merch”

Now that your modern content architecture is in place, you can easily integrate a MACH search and merchandising platform such as Constructor. This combination allows you to create an experience that blends products and content together, uses AI-driven search to deliver the most attractive information on any channel, and guides shoppers through unlimited choice to the items that are perfect for them. “When you look at the search experience of the new world, it’s going to look much more like a magazine. You’ll have content embedded together with the product search results,” says Constructor’s Eli Finkelshteyn. Search can be liberated from the traditional grid style and blend results in different combinations, sizes, and formats. “It’s a much more visually pleasing array that also takes into account all of the different things that a company has to offer.”

3. Create Your Rich Content

Powering ecommerce with content marketing means teams will need to create a substantial amount of high-quality, well-organized content. Developing rich content — which includes videos, infographics, and even other media like podcasts and webinars — gives your consumers more to engage with and your smart content delivery tools more resources with which to create customized experiences. This is why it’s key for brands to choose tools that make it easy for merchandisers, editors, and developers to quickly, and autonomously, create and manage content.

Build a Cutting-edge Ecommerce Experience with Constructor + Contentstack’s Omnichannel Content Management System

The Gartner Research report on composable commerce states, “Digital commerce platforms are experiencing ongoing modularization in a cloud-native, multi experience world.
Application leaders responsible for digital commerce should prepare for a ‘composable’ approach using packaged business capabilities to move toward future-proof digital commerce experiences.” You don’t have to go all-in on an expensive and time-consuming replatforming project to build a cutting-edge ecommerce experience — you can use flexible MACH tools to migrate to composable, omnichannel systems at your own pace. Enjoy a free trial or demo of Contentstack today to see how this migration is achievable and how you can integrate headless content with Constructor’s intelligent search to start building impactful, content-powered ecommerce experiences that drive greater revenue for your company.
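As a closing illustration of the “magazine style” blending Finkelshteyn describes, here is a minimal sketch that interleaves editorial content cards into a product result list. The types, titles, and insertion cadence are invented for the example; they are not Constructor’s actual API:

```typescript
// Sketch of "magazine style" search results: editorial content cards are
// interleaved into the product grid. Shapes and cadence are illustrative,
// not Constructor's actual API.
type Card = { kind: "product" | "content"; title: string };

// Insert one content card after every `every` products, until content runs out.
function blendResults(products: Card[], content: Card[], every: number): Card[] {
  const blended: Card[] = [];
  let next = 0;
  products.forEach((p, i) => {
    blended.push(p);
    if ((i + 1) % every === 0 && next < content.length) {
      blended.push(content[next++]);
    }
  });
  return blended;
}

const products: Card[] = ["Slim Pants", "Cargo Pants", "Chinos", "Joggers"]
  .map(title => ({ kind: "product" as const, title }));
const editorial: Card[] = [{ kind: "content", title: "How to Style Pants for Fall" }];

console.log(blendResults(products, editorial, 2).map(c => c.title).join(" | "));
// prints "Slim Pants | Cargo Pants | How to Style Pants for Fall | Chinos | Joggers"
```

A real discovery engine would rank both lists before blending; the point here is simply that content and products can share one result stream instead of living in separate silos.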

Jul 22, 2021

Project CUE: Developing Headless Content Personalization in 4 Sprints

Personalization has caused headaches for many development teams. Either it’s approached as a massive project that takes months of work before results can even be tested, or it comes from a third-party marketing add-on that slows down delivery and ruins site optimization. To show there’s a better path to personalization for both IT and marketing teams, Contentstack, Uniform, and EPAM teamed up for Project CUE, a proof-of-concept project that revolves around building a personalized itinerary of Las Vegas events. Read below for a look at how a headless content management system (CMS) and Jamstack personalization combine for real-time relevance. Or see how to set up the solution for yourself in the developer workshop on content personalization with Contentstack and Uniform.

Project CUE: Inception to Delivery in Less than 12 Weeks

With just two weeks of pre-work and four frontend sprints, the EPAM development team was able to launch a functional, personalized travel experience. The Project CUE architecture included:

Contentstack - headless content management
Uniform - API orchestration and personalization
Next.js - frontend framework
Google Firebase - development platform and authentication

Pre-development

In the weeks before frontend development kicked off, the overall user experience (UX) was decided on, wireframes for the homepage were created, web design began, and a foundational content model was mapped out. Agile development of UX, design, and content continued in parallel with the frontend development sprints. “When people say that you can do agile with a monolith, you can’t; it’s a true waterfall,” says Tony Mamedbekov, Principal Sales Engineer at Uniform. “Whereas here we actually don’t really care what components are being built.
We can start setting up personalization intents and signals, we can start creating content models in Contentstack, we can create content on top of those content models, and everybody can just check in once a week and say how it’s going.”

Sprint 1: A functioning website was up in a week, leveraging quick starts from Uniform and Contentstack, with the homepage and checkout in place. Personalization was added to some event pages to start testing how it worked.

Sprint 2: We created a booking page that allows you to add multiple guests and shows related suggestions (e.g., a restaurant aligned with your taste that has a table available before the show you’re booking). Login functionality was created using Google Firebase for authentication.

Sprint 3: Set up personalized push notifications for the Progressive Web App (PWA), such as a newly listed event or restaurant promotion. A ticker countdown now appears on the homepage after you book an event. Content was fully tagged with relevant signals and intents.

Sprint 4: Quality Assurance and User Acceptance Testing (QA/UAT).

This level of speed was possible thanks to MACH architecture, a modern approach to development that leverages microservice-based, API-first, cloud-native, headless tools. Both certified members of the MACH Alliance, Contentstack’s highly flexible CMS and Uniform’s powerful personalization engine let teams quickly reap the rewards of relevant content and scale with ease.

Headless Content Modeling

Contentstack decouples content storage from presentation, exposing all content via APIs to make it easy to combine content with a personalization engine and deliver relevant content using your choice of frontend framework. Here are some tips for setting up a headless content model for personalization.

Craft Once, Use Everywhere

Create data fields that store data as clean strings, without any HTML that would make the data web-specific.
For instance, the goal of Project CUE was an experience where a visitor could discover and book relevant events around Las Vegas. Each event, such as a show or a restaurant, was stored once in the CMS as neutral JSON so a frontend framework could wrap it in the desired presentation for web and mobile, while a voice application could simply read out the string.

Everything As a List

Instead of the legacy approach of delivering versions A, B, and C of a page, you can create a repository list of content components and let a personalization engine choose the right component at the right time for each visitor. In Project CUE, lists included event descriptions, restaurant menu options, the best offers for the visitor, and available dates. “Set your architecture up from the start to ‘render one of these components’ instead of ‘render this component,’ even if it’s just pointing to a list of 1. This makes it nice and easy from a development standpoint to add variants later,” suggests Nick Barron, Director of Partner Enablement at Contentstack.

Global Fields

A global field is a reusable field (or group of fields) that you can define once and reuse in any content type. For instance, a call-to-action (CTA) global field could include the description of a Las Vegas event, a picture, and the price. All frontends can use the same global field, but each can decide which content to show. The web experience might show all three pieces of information, mobile might deliver only the picture, and a voice assistant would just read out the description. For personalization, having all variants use the same global fields makes maintenance easy (e.g., updating the CTA button color in the global field structure makes the change across every variant).

Jamstack Personalization

Uniform uses Jamstack, a development architecture based on client-side rendering, to deliver real-time personalization at blazing-fast speeds.
When combined with a headless CMS, here’s how Uniform accelerates personalization for both marketing and development teams:

API Orchestration

Uniform enables you to bring multiple integrations into one place with a few clicks. With Uniform, native integration into the CMS allows marketers to manage personalization from within the UI of their existing content management system, like Contentstack. Uniform orchestrates the different data sources that drive personalization, using Jamstack architecture to decouple your backend services from client-side personalization so you can add - or remove - a data source without having to rewire the whole setup.

Intent-driven

Personalization is done by mapping the real-time signals of visitor behavior to intents, removing the need for rigid personas and complex rules in exchange for intuitive content tagging. “I don’t have to create a rule and say if X then Y. I just say fine dining is an intent, and I map that to related content. When the visitor scores for the fine dining intent, they see content related to fine dining throughout the site. You don’t have to set the rule on a certain page; just tag the content once and let it work,” says Uniform’s Tony Mamedbekov.

Edge-based

With Uniform, personalization happens on a visitor’s device (the edge) without needing to call back to the original data source. This enables real-time personalization without render-blocking JavaScript, resulting in an excellent time to first byte (TTFB) and a high Lighthouse score. It also allows for a higher degree of privacy, as an experience can be personalized with information that never leaves the visitor’s own device.

Create a Fast POC and Expand

Project CUE was able to get up and running quickly with a development team that was new to both systems. “It turns out that it’s pretty simple to implement something based on Contentstack,” says Maksym Hordiienko, Software Engineer at EPAM.
“We were able to take an example project as a starting ground and add our own functionality to it through highly functional features like webhooks.” “I like the intents in the Uniform platform the most,” says Nataliia Shyriaieva, Frontend Engineer at EPAM. “It looks like magic and allows us to do really powerful things in a very simple way.” With this combination of easy-to-use platforms, teams can start seeing the benefits of personalization quickly and use that momentum to roll it out across the experience. “You can have that personalization up and running in a matter of weeks, not months or years, at a small level as a pilot or proof of concept and then you can keep iterating on top of that,” says Contentstack’s Nick Barron. “You can keep expanding your personalization scope as time goes on in a very agile approach.”
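The signal-to-intent-to-content flow this post describes can be sketched end to end in a few lines. The signal names, weights, and content shapes below are invented for illustration; they are not the actual Uniform or Contentstack SDKs:

```typescript
// Hypothetical behavioral signals, each mapped to an intent with a weight.
// All names and numbers are invented for this sketch.
const signalWeights: Record<string, { intent: string; weight: number }> = {
  "viewed-restaurant-page": { intent: "fine-dining", weight: 10 },
  "clicked-show-email": { intent: "booking-tickets", weight: 20 },
  "quiz-answer-foodie": { intent: "fine-dining", weight: 30 },
};

// Accumulate intent scores from the visitor's real-time behavior.
function scoreIntents(signals: string[]): Record<string, number> {
  const scores: Record<string, number> = {};
  for (const s of signals) {
    const w = signalWeights[s];
    if (w) scores[w.intent] = (scores[w.intent] ?? 0) + w.weight;
  }
  return scores;
}

// "Render one of these components": content variants tagged with intents,
// selected by the visitor's strongest intent, defaulting to the first variant.
interface Variant { intents: string[]; description: string }

function selectVariant(variants: Variant[], scores: Record<string, number>): Variant {
  const top = Object.entries(scores).sort((a, b) => b[1] - a[1])[0]?.[0];
  return variants.find(v => top !== undefined && v.intents.includes(top)) ?? variants[0];
}

const heroVariants: Variant[] = [
  { intents: [], description: "Welcome to the resort" }, // default
  { intents: ["fine-dining"], description: "Chef's tasting menu tonight" },
  { intents: ["booking-tickets"], description: "Headliner show on sale" },
];

const scores = scoreIntents(["viewed-restaurant-page", "quiz-answer-foodie", "clicked-show-email"]);
console.log(selectVariant(heroVariants, scores).description);
// prints "Chef's tasting menu tonight" (fine-dining scores 40 vs. 20)
```

Note how no page-level rules appear anywhere: adding a variant to the list or a signal to the map is enough to extend the personalization, which is what makes the approach agile.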

Jun 25, 2021

Podcast: Architecting MACH-Based Personalization (Microservices, API-first, Cloud-native & Headless)

In this podcast, Contentstack founder and CTO Nishant Patel talks to Uniform co-founder Lars Petersen about how even the largest organizations can adopt a modern approach to personalization and free themselves from the burden of third-party plugins and the complicated, expensive processes that come with monolithic suites. They discuss how, despite the fact that the concept of personalization has existed in the business world for upward of 10 years, it’s only now gotten to the point where developers and content professionals can create personalization campaigns that consumers actually want to experience. Why did it take so long to get here? As Nishant and Lars explain, modern personalization is the result of cutting-edge MACH (Microservices, API-first, Cloud-native & Headless) architecture that empowers brands to kill the rules engine and instead act on individual behaviors to create truly unique experiences.

More About Our Guest, Lars

Lars Petersen is the co-founder of Uniform, a Jamstack-based technology platform and member of the MACH Alliance that makes personalization agile for even the largest organizations without sacrificing site performance. Connect with Lars, the self-proclaimed “Dane in San Francisco,” on LinkedIn and Twitter. You can also check out his writing in Connect: How to Use Data and Experience Marketing to Create Lifetime Customers.

Top Takeaways from Today’s Episode

While we certainly recommend catching the full episode, here are some of the highlights of Nishant and Lars’ discussion, from the driving force behind personalized experiences to why the rules engine must die.

Slow Suite Systems vs. Modern MACH Stacks

Lars: “We see [that] third-party client-side personalization has been more in the lead for the last couple of years. That is definitely changing now that performance is becoming critical, because if you add in anything that is third-party that adds a load to your site, that will impact the Core Web Vitals.
“MACH is all focused on, as in its name, microservices, API, cloud, headless. It means that it’s multiple headless technologies that are connecting together in a MACH architecture. So that means that the load from the suite approach is gone, because you’re working with smaller, best-of-breed technologies that [are] really, really good at the specific capabilities—like Contentstack for content management.”

Why It’s Time to Kill the Rules Engine

Lars: “We wanted to kill the rules engine so badly because the rules engine, where someone is setting up ‘If you mix this and not that and this … and then show this content,’ that is way too complex for many organizations, because the dynamics change all the time.

“You get new content, you do a new marketing campaign — it’s always changing. If you have to update your rules, and you have to update them on all the different pages where you have rules, then you basically end up using all your time creating rules and updating rules. That’s not a fun experience going to work every day. … We wanted to kill that.”

Content is the Core of the Personalization “Love Story”

Lars: “What really makes personalization great is content. Content is the backbone of personalization. Personalization is just about delivering the right content based on different intents. If you really think about it, content is what makes your customers fall in love with your brand. We make that love story happen.”

So How Do You Build It?

Listen to the episode to hear Lars cover the architectural considerations for implementing a MACH-based personalization engine within a headless ecosystem, why the new personalization technologies are crucial for Core Web Vitals, and how removing roadblocks for both marketers and developers with new personalization tooling is the way to actually get personalization live.
Build a Brand Love Story with Contentstack + Uniform

As a new member of Contentstack’s Catalysts program, Uniform will enable Contentstack users to layer the MACH-first personalization platform on top of their content efforts to create more agile, more scalable, and more unique consumer experiences—all without the pain of replatforming. Learn more about our exciting partnership here, and learn more about how we set up a live personalization project in less than 12 weeks: Introducing Project CUE.

Jun 03, 2021

Reimagining Agile Omnichannel Personalization with Headless Content

At the end of 2019, Gartner predicted: “By 2025, 80% of marketers who have invested in personalization will abandon efforts due to lack of ROI, the perils of customer data management or both.” Of course, since that prediction, a lot has happened. While traditional, rules-based personalization might very well be abandoned in a few years, the rise of intelligent personalization tools is opening the door for a new approach. “We’re moving into a world where organizations have access to data that allows them to personalize not just the products you’re seeing, but the experience and the timing of conversations a business is having with you,” says Matt Bradbeer, MACH Business Lead at EPAM Systems, Inc. and Co-Founder of The MACH Alliance. “Personalization has moved from a very blunt approach of showing an ad or a product to having a constantly evolving conversation.” Fortunately, the enterprise technology market is adapting quickly to suit the omnichannel customer - and the busy teams that cater to them. To show just how fast companies can get content personalization up and running with modern MACH tools, Contentstack is partnering with Uniform and EPAM to launch Project CUE - taking personalization from idea to market in less than 12 weeks.

Introducing Project CUE

To showcase the possibilities of a MACH approach to content personalization, Contentstack, Uniform, and EPAM have teamed up for what we are calling “Project CUE.” In less than one business quarter, the team will identify a personalization use case, develop a content model in Contentstack, add personalization with Uniform, and debut the omnichannel experience at the CMSWire DX Summit on July 29th. “Most of the businesses I talk to are ready for personalization, and many have tried it, but the question becomes how to do it successfully,” says Neal Prescott, Vice President of Digital Technology at EPAM.
“We don’t see many people wondering if personalization is the right choice; it’s just unclear how to take those first steps.” This is the goal of the project: to show how - and how quickly - personalization can be implemented across the customer journey. To kick off Project CUE, we spoke with the co-collaborators to understand the historical challenges to personalization, the steps teams can take to overcome those barriers, and how a MACH approach (microservice-based, API-first, cloud-native, headless) is helping enterprises reimagine personalization in an omnichannel world.

Shift to Omnichannel Thinking

Getting on board with the idea that customer journeys are no longer linear means digital marketers have to let go of old ways of thinking about channels, pages, and content. “One of the things that made personalization so difficult in the past was the traditional presentation- or channel-centric approach to content,” explains Peter Fogelsanger, Global Head of Partnerships at Contentstack. “If there was content on one page that would be valuable for personalization on another, it was a very cumbersome—and often manual—process to leverage that content elsewhere. Even within the same website.” Marketers stuck with single-use content often leaned toward two extremes when thinking about personalization. Content marketers tried to manually create a new set of assets for every possible interaction a customer could have, which required an overwhelming amount of content and didn’t always make sense from an ROI perspective. Technical marketers, meanwhile, narrowed in on the UX aspect to optimize banners and CTAs, which moved the needle on conversions but focused on only a narrow slice of the customer-brand relationship. “The technology has caught up to where the typical marketer is able to find the sweet spot in the middle,” says Fogelsanger, referring to the rise of headless content. A headless approach decouples the content from its presentation.
So the content on that landing page you just published can be broken up and reused in unlimited ways. Pull the intro copy over into an email, push the header banner to an in-store kiosk, or have the product descriptions read out loud by voice search. No more single-use content. That means teams can escape the page-based walls of manual personalization and explore a more scalable approach of mapping individual pieces of content to multiple visitor intents.

Stop Proposing on the First Date

Uniform, the headless, API-first personalization platform, is helping many customers adopt a mindset of delivering content as a response to a customer’s in-the-moment intent. With personalization tools that adapt content as a visitor explores the site, marketing teams are no longer limited to A/B testing single interactions but can focus on a more long-term relationship with a customer. “A good way to think about it is a love story, because content is what makes a visitor fall in love with your brand,” says Lars Petersen, Co-Founder of Uniform. “If you go on a blind date and the first thing your date asks is ‘will you marry me?’ that’s going to be awkward. Yet, as consumers, when we come into a website today the first thing we see is ‘buy this’ or ‘get a call from sales today.’” Like any relationship, the one you have with your customer works best when it’s two-sided. “Personalization is a feedback loop; it’s not just a one-way channel of showing customers what you think they want to see,” says EPAM’s Matt Bradbeer. “You have to give the customer a way to talk back, to have a conversation, and be able to use that information to change the way you do business.”

Bridge the Gap Between Creation and Personalization

Using legacy, manual personalization tools to deliver an experience that reacts to visitor feedback is, for many enterprise companies, not scalable enough to be worth the time and effort.
“You essentially need to have a PhD in creating rules,” says Petersen about the traditional rules-based approach to personalization. “It’s a very technical task.”  For many marketing teams, the tool used to manage content is completely separate from the tool used to personalize it. Often, this means personalization efforts are handed over to someone else or forgotten entirely.  But today, modern software vendors are designing their products with integration in mind, using standardized API building blocks that make it possible to manage multiple tools in a shared user interface. Sharing a UI makes it easy to weave personalization into the authoring process, so that the people closest to the content can be the ones who put it in context.  “Having an easy-to-understand personalization layer, like Uniform, and having it exposed right inside the content experience is a real game changer,” says Fogelsanger. “Now you don’t need to think about personalization differently; you’re just describing the content when you’re authoring it.” Take Control of Your Data  Not only do modular tools bring previously siloed applications into the same UI, but they also make it easier to connect the data behind them. “One thing I would say to people on this journey is to get your data into a place where you control it and own the structure, and it’s aggregated, agnostic, and available to any system or touchpoint you want, when you want it,” says Bradbeer. Building this type of fully composable, data-agnostic architecture doesn’t happen overnight, and a major benefit of MACH tools is that companies can start benefiting from this approach without waiting around for a big-bang replatform. “You can put a MACH frontend or backend on what you’ve got, which allows you to get some immediate benefits while chipping away at the larger architecture,” says Prescott. 
“You can get going where you can and where you’ll see the biggest impact, while also planning for the future.” This lowers the barrier to entry for personalization. Companies can gain momentum in one area, such as email nurture or the promotion of an annual event, then reuse that data and scale efforts to the rest of the experience over time.  We’ll Show You How “Project CUE is about showing that an omnichannel experience can have a fast time to market,” says Petersen. “That it can meet the requirements of fast performance based on the Core Web Vitals, and that the experience the visitor gets across the different channels is highly relevant to their intents.” “The overall theme of Project CUE is that you can start now,” says Fogelsanger. “There are things that you intuitively know about your customers and your business that can be a starting point.”  “I think there are a lot of brands interested in taking this approach, they’re just not sure how to do it or not sure if they can do it in a way that’s not too disruptive to their content machine,” adds Fogelsanger. “My hope is that we can show there is a lower-impact way of doing things a lot smarter, that you can weave personalization into the way that you’re managing your content experiences today.”  Project CUE kicked off this May, so stay tuned for updates and insights.
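To make the intent-mapping approach concrete, here is a minimal TypeScript sketch of how metadata tags on content might be matched against a visitor’s real-time intent signals. This is illustrative only, not the actual Uniform or Contentstack API; the type names, tag values, and scoring logic are all assumptions.

```typescript
// Each piece of content carries simple metadata tags mapping it to visitor intents.
interface ContentEntry {
  id: string;
  title: string;
  intentTags: string[]; // e.g. "show-tickets", "gourmet-dining"
}

// Intent signals accumulated from real-time behavior (pages browsed,
// emails clicked, quiz answers), each weighted by strength/recency.
type IntentSignals = Record<string, number>;

// Score an entry by summing the signal weights of every tag it carries.
function scoreEntry(entry: ContentEntry, signals: IntentSignals): number {
  return entry.intentTags.reduce((sum, tag) => sum + (signals[tag] ?? 0), 0);
}

// Rank matching entries so the most relevant content surfaces first.
function personalize(entries: ContentEntry[], signals: IntentSignals): ContentEntry[] {
  return entries
    .map((entry) => ({ entry, score: scoreEntry(entry, signals) }))
    .filter(({ score }) => score > 0) // keep only entries with a matching tag
    .sort((a, b) => b.score - a.score)
    .map(({ entry }) => entry);
}

// Hypothetical resort content in the spirit of the Balbianello demo.
const entries: ContentEntry[] = [
  { id: "1", title: "Magic Show Tickets", intentTags: ["show-tickets"] },
  { id: "2", title: "Chef's Table Dinner", intentTags: ["gourmet-dining"] },
  { id: "3", title: "Dinner-and-Show Package", intentTags: ["show-tickets", "gourmet-dining"] },
];

// A visitor who has mostly been browsing events, with a lighter dining signal.
const ranked = personalize(entries, { "show-tickets": 0.8, "gourmet-dining": 0.3 });
// The combined package ranks first because it matches both intents (0.8 + 0.3).
```

Because the tags live on the content itself, editors can add or remove them as the strategy evolves without touching the selection logic.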

Nov 05, 2020

Survey: The Pains and Priorities of Digital Experience for IT Leaders in 2020

We recently conducted a survey of 100 business, technology, and marketing leaders at UK enterprises to learn more about how well current digital experience investment aligns with digital transformation ambitions. Overall, we found that there is a lot of room for improvement. Only 27% of respondents said their current tools were adequate to meet their ambitions over the next 18 months. Furthermore, companies are wasting a quarter of their digital experience budget on capabilities they don’t use. Nearly all (98%) of these organizations have some sort of digital transformation project underway, and technology executives are at the helm of most of these efforts. These survey results give valuable insight into the pain points these digital leaders are experiencing with their current investment, how they plan on moving forward, and the slight differences in their transformation ambitions compared to their business and marketing counterparts. The Role of IT is Getting Wider As digital expands its reach across the enterprise, it blurs the roles and responsibilities within the C-suite. IT leaders are seeing an increase in project requests — a 40% increase in 2020, according to a MuleSoft report published at the start of the year — and an increase in the work required to get the company on board with new changes. This year has made it clear just how critical digital business is. IT has seen an increase in focus and funding due to the pandemic, with 38% of IT decision makers saying it has helped improve understanding of the department and 30% saying they now have more control over business decisions. This increased authority, combined with their wide-angle view of the company’s digital architecture, sets the stage for IT leaders to make significant shifts in an organization’s digital maturity. Today’s Investment Pains Technology, business, and marketing leaders agree about the current state of their digital experience investment. 
All three groups report that a quarter of their investment is inadequate for today’s ambitions, and roughly half of the features and capabilities in that investment go unused. However, there are differences of opinion on future technology choices. Over the next 18 months, 52% of business leaders expect to invest in new technologies compared to 36% of IT leaders. In the same time period, marketing and business leaders expect current investment to become more inadequate, while IT leaders don’t expect further decay. This could be attributed to IT leaders having a better overview of the current toolkit. There may be tools already purchased, not yet implemented, that are capable of meeting future needs. With 74% of IT leaders listing “lack of integrations with other technologies” as a top factor contributing to technology inadequacy, they may be aware of upcoming integration plans that will make specific tools more usable. So while some investments will decay, others will become more adequate over time. On the other hand, the difference could indicate that business and marketing leaders hear more about capability performance from the user’s perspective. The backend of a solution might be smoothly implemented, and in theory it should be able to do what you want. Still, if users find it frustrating, it won’t be seen as an adequate technology. Both groups were more likely than their IT counterparts to say that the high cost of implementation and low employee adoption were barriers responsible for unused digital experience capabilities. IT leaders will need to address departmental barriers and differences in expectations to ensure future technologies have the best chance of success. New Technology Priorities When asked about their top three evaluation criteria for new technologies, IT leaders were most likely to cite total cost of ownership (62%), ease of integration (56%), and optimizing current processes (50%). 
They were least likely to be looking to new technologies to open new revenue streams. With half of CTOs stating that part of their role is dedicated to modernizing core technology infrastructure, it makes sense that the top priorities revolve around fine-tuning business processes, budgets, and the backend.  How 2020 Has Shifted Priorities It’s estimated that the impact of COVID-19 has accelerated companies’ digital communication strategies by a global average of 6 years. An IBM survey found that 66% of executives have completed initiatives this year that previously encountered resistance. Traditional and perceived barriers like technology immaturity and employee opposition to change have fallen away. When asked how COVID-19 has impacted their technology evaluation criteria, every criterion was rated as more important by at least half of technology leaders. Remote work support and ease of employee adoption saw the most impact, with 83% and 71%, respectively, saying these had become more critical. IT leaders have had to take on the monumental task of figuring out how to keep business running with a suddenly remote workforce, so it’s no surprise that their top priorities focus on ensuring tools are usable. With over half of employees wanting to continue working remotely after the pandemic, the technology decisions an organization makes now are laying the foundation for the long term. Steps to a Successful IT-Led Digital Transformation In a recent ebook about modern enterprise architecture, we interviewed digital directors who are currently leading their organizations through major transformation efforts. They shared some of the key lessons they’ve learned along the way to company-wide change. Align the Brand Early When Tom Morgan, Director of Digital at The Spectator, was tasked with modernizing the British media company’s architecture, he said the first step was to understand what the 200-year-old brand meant to their readers. 
“In order to have a chance of changing people’s perceptions around technology, I really needed to be able to speak the brand language. That was a case of spending immersive time with our product, but also spending immersive time with our customers. Above all the other change mechanisms we did, all the prototypes and all the bulldozing, that was probably the most powerful one. It means that now, when I sit with other stakeholders and talk about decisions from a technology point of view, I can talk with a position of authority about what that means to our readers and what that means to our customers, but also what it means to our legacy and our institution.” From Departmental Projects to Multidisciplinary Products Whereas projects have clear end dates and focus on concrete deliverables, products have evolving roadmaps and focus on delivering functionality and measurable added value. This sets up a space for cross-departmental teams to collaborate on the full product lifecycle — planning, design, implementation, launch, value measurement, improvement, maintenance, and retirement. Approaching transformation with a cross-section of departments helps each stage go more smoothly, says the Director of Digital at an iconic British luxury fashion house. “There will always be mini compromises along the way, but you can have a lot of unity from the very beginning in the choices you make. That sets you up well for the phase where you deliver.” New Types of Vendor Partnerships While IT respondents were not likely to place importance on vendor support, 39% of business leaders felt it was a top reason current investment would become inadequate. Marketing leaders ranked it as the second most important evaluation criterion for new technology. Marketing and business teams who are eager to employ modern software capabilities are looking to partner with vendors to pilot new types of digital experiences. 
IT departments, meanwhile, may have a more realistic view of how mature the architecture is and feel the company is not yet ready for these initiatives. For enterprises that are ready to modernize their architecture, however, a vendor relationship can be fruitful. In interviews about their move to MACH architecture (Microservices, API-first, Cloud-native, Headless), digital leaders discussed this partnership’s importance. “We’re going to be the first UK publisher to be doing headless at this scale. You can’t do that without an understanding from your solution partner that it’s more than just a software-as-a-service relationship. It’s a deeper part of the journey.” — Tom Morgan, Director of Digital, The Spectator Strong Collaboration for a Smooth Journey Companies that aim to be digital leaders will have to take some paths that don’t have clear roadmaps. When companies stall in their digital progress, the cause is more likely to be misaligned culture (18%) or insufficient commitment across the organization (14%) than significant disruptions to the market (6%), according to a study by McKinsey & Company. The most commonly cited reason among those who avoided digital derailment was “strong alignment and commitment across the organization.” Clear communication among the C-suite, departments, and external vendors can match priorities with possibilities and create the smoothest journey to digital transformation. See the full results of the UK enterprise leader survey: The State of Digital Experience Investment in the United Kingdom Learn more about how digital directors are modernizing business: Break the Replatforming Cycle with MACH Enterprise Architecture

Nov 04, 2020

Project Spyglass: 4 Weeks to a Working Augmented Reality Prototype

Contentstack and Valtech built a working augmented reality showcase in less than 4 weeks of development. Here you’ll find the resources for the full story: how we built it, what we learned, and of course, how you can try it for yourself. The brief was to build a production-ready (or as close as possible), open-sourced, mobile-web-based (for maximum accessibility) augmented reality demo that leveraged Contentstack’s content experience platform and headless CMS. Spoiler alert: We did it. Here’s another spoiler: You can view the demo video below, and try it for yourself here: spyglass.valtech.engineering Read the full story of how we built Project Spyglass at the links below: Week Zero: From brief to concept Week One: From concept to game plan Week Two: Content modeling and interaction building Week Three: Integration! See the evolution of the project in the video: Check out the full video playlist here What Did We Learn? 1. Start with “Why?” Augmented reality applications should not be built out just because they are “cool”. Any new customer-facing touchpoint needs to respond to user needs. We learned that AR is useful for distilling large volumes of potentially complex information and, coupled with personalization technology, can provide a customized retail experience that helps filter through potentially confusing purchasing options. This was our starting point and our anchor throughout the entire project. 2. Use tools that simplify Technology can help. After extensive research on augmented reality frameworks and methodologies, we selected the following technology stack: A-Frame, which is a way to describe a 3D scene in HTML-like markup; AR.js, which layers that 3D scene on top of camera pixels; Parcel, a low-configuration bundling tool; and Contentstack, to provide the content to the application in real time via its headless GraphQL API. 3. Work in parallel A very small team (4 people) built this demo in 4 weeks. 
The trick was having all parts of the project developed simultaneously and iterating throughout the course of the project.  Content: Mapping content properly is essential, so start early and continue to iterate. Design: In keeping with our “why,” the goal was making the application easy to interact with, so our designer iterated on sketches, concepts, and designs right until the very last day. Engineering: Work out individual scenarios, then put them all together later. 4. MACH is Essential Having an API-first headless CMS made it possible to quickly upload content and pull it into the application. MACH (microservices, API-first, cloud-native, and headless) technologies make it quick and easy to build out new touchpoints, whereas with a more traditional system it would have taken much longer to create an environment where this innovation could be built. Want to Learn More? Watch lead architect Jason Alderman and Contentstack Head of Product Marketing Sonja Kotrotsos delve deeper into all the lessons we learned from building Project Spyglass in this session from TNW Conference.
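To illustrate the kind of real-time content delivery the demo relies on, here is a rough TypeScript sketch of fetching AR content over a GraphQL API. The content type (`ar_hotspot`), its fields, and the query shape are hypothetical stand-ins, not the actual Project Spyglass schema or Contentstack’s exact GraphQL surface.

```typescript
// An illustrative content model: a 3D "hotspot" the AR scene can render.
interface ArHotspot {
  title: string;
  description: string;
  position: { x: number; y: number; z: number };
}

// Build a GraphQL query for the hotspots attached to a given product.
// The type and field names here are assumptions for the sketch.
function buildHotspotQuery(productUid: string): string {
  return `
    query {
      all_ar_hotspot(where: { product_uid: "${productUid}" }) {
        items {
          title
          description
          position { x y z }
        }
      }
    }`;
}

// At runtime, the app would POST the query to the CMS's GraphQL endpoint
// and feed the result into the A-Frame scene.
async function fetchHotspots(endpoint: string, productUid: string): Promise<ArHotspot[]> {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: buildHotspotQuery(productUid) }),
  });
  const json: any = await res.json();
  return json.data.all_ar_hotspot.items as ArHotspot[];
}

const query = buildHotspotQuery("prod_123");
```

Because the CMS is headless, the same entries could just as easily feed a product page or a kiosk; the AR scene is simply one more consumer of the API.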

Oct 26, 2020

How to Choose the Best Partner to Transition Your Technology to MACH

While many digital transformation projects can be made more successful with the support of a qualified implementation partner, such support is almost essential for businesses that want to pursue a MACH technology architecture. In this article, we pull from a history of MACH implementations shared by Contentstack and Valtech to share the top evaluation criteria businesses must consider when selecting a MACH implementation partner. What is MACH? MACH is a new breed of technology built on four key principles: it’s microservices-based, API-first, cloud-native, and headless. This kind of technology architecture enables businesses to build ever-evolving digital experiences with tools that are always modular, pluggable, and scalable. Why Go MACH? MACH technology allows you to create an enterprise technology stack made up of exactly the tools you need, when you need them — and replace them once they no longer meet your needs. Microservices help cut development lead times by 75%. MACH technology is, by definition, easily extendable. That means, with MACH, your business can use robust APIs to quickly integrate all of your technologies with any new tools (like personalization engines) or new channels where your customers might be found (like augmented reality applications). And because cloud-based MACH is always up to date, it never requires costly, time-sucking upgrades, ultimately reducing your total cost of ownership. In fact, cloud deployments deliver 3.2x the ROI of on-premise ones. Why It’s Important to Have a Partner for Your MACH Transformation Moving to MACH architecture requires specific expertise, both for the initial shift and for the ongoing, business-wide transformation that switching to MACH can incite. For this reason, it makes sense for businesses to partner with professional services providers that have MACH implementation and operation experience. 
Not only will the best MACH partner have the right staff on-site (or the ability to source the talent needed) to get your implementation off the ground, but they’ll also be able to help design and implement a solid foundation for your business and provide ongoing guidance for your team. Tips for Choosing the Right Technology Partner for Your MACH Implementation Remember these six tips when you’re evaluating technology partners to implement MACH within your organization. Look for Outcomes, Not Time Spent First things first, you want a MACH implementation partner that prioritizes a results-focused engagement — not just a package of hours for a one-dimensional technology implementation and an ongoing support agreement. To find out how likely a potential partner is to deliver actionable outcomes for you and your customers, take a look at their past MACH technology implementations. Start by asking for case studies, which will give you an introduction to their track record for producing results. Then consider contacting past clients and finding reviews across blogs, review platforms, and even social media. Pursue Ancillary Skills That Power the Entire Engagement Beyond the technical skills that go into a MACH implementation, there are ancillary skills a partner should also have to make your entire transformation a success. “In my opinion, one of Valtech’s strengths is both the technical delivery of the services related directly to Contentstack, but also what I call the ‘soft skills’ critical to the overall engagement: customer experience strategy, data science, content, etc.,” said Peter Fogelsanger, Contentstack’s Global Head of Partnerships. “When you’re moving to MACH there’s a lot more to it than just replacing your CMS. It needs to be strategic and it needs to be agile and it needs to fit with the rest of the stack. “It’s like building a house. 
You can learn how to do it yourself, but it’s nice to have a partner who can actually guide you through things like getting your internal teams spun up to support MACH.” When transitioning to MACH, it’s best to do it alongside an implementation specialist that can support your digital transition every step of the way, even through the less technical tasks. Find Cultural Alignment Working with an implementation partner that aligns with your culture should be a priority when it comes time to transition to MACH. For example, Contentstack and Valtech’s MACH implementation projects have gone so well because both organizations are culturally aligned — both are members of the MACH Alliance, both feel passionate about helping enterprises achieve a MACH future, and the end-user experience is a top priority for both businesses. “One of the things that has made our partnership strong is that both organizations are customer experience-oriented,” said Matthew Morey, Senior Vice President of Technology at Valtech. “It may sound cliché, but a lot of service providers are only billable hours-focused and many software companies are only results-focused. We’re both interested in the end-user achieving a desired KPI or improving their ability to do a specific task.” When you’re aligned with your professional services provider, you’re more likely to be on the same page when it comes to setting and achieving goals throughout your MACH transformation. That’s a recipe for a happy team and, eventually, happy customers. Engage in a Paid Proof-of-Concept Project Boilerplate demos of individual MACH solutions won’t give you the full picture of how an organization-wide implementation will go for your business. And simply talking to an agency or watching their pitch won’t be the same as actually working with them. Consider instead embarking on a small but paid proof-of-concept project when choosing a MACH implementation partner. 
This monetary investment means your own stakeholders will be more engaged, and it will give you a more accurate idea of what it’s like to work with your implementation partner of choice when real results (and funds) are on the line. Your investment in a paid proof-of-concept project doesn’t necessarily have to be a loss. If chosen well — and your partner can help with this! — you may end up with a useful chunk of code or even a start to a product that you can develop for customer use. Another reason to invest in a paid proof-of-concept project is that you’re likely to work with the same team you’ll be paired with for the full MACH implementation — giving you another chance to experience how your cultures align. Luckily, with MACH, building out a paid proof-of-concept project doesn’t have to be an arduous process. Thanks to the composability of MACH technology, projects can be spun up in a matter of weeks — not quarters — when teams are aligned on goals and priorities. On-premise, monolithic systems don’t give you this opportunity the way cloud-based, modern solutions do. Take advantage of that. Seek Out the Boundary Pushers The Valtech + Contentstack partnership works because both parties are boundary pushers. Valtech is always excited to implement emerging experiences using Contentstack’s platform — which is pushing boundaries in its own right in the world of headless content management. Recently, we worked together to take something for which there was no blueprint (a content-rich augmented reality experience) and push the boundaries to chart our own. The augmented reality demo that we built in under four weeks — Project Spyglass — was only possible because of MACH principles and because both partners were willing to challenge the status quo. Partner on Prioritizing Innovation As parting wisdom often is, our final bit of advice is philosophical. 
In the words of Pascal Lagarde, Valtech’s Vice President of Commerce in Europe: “… don’t focus on features. Focus on vision; direction.” Instead of homing in on the specific features or solutions a potential partner is currently working with, look at whether they have the same overall philosophical vision as your organization. MACH technology is ever-evolving and ever-improving. Businesses and implementation specialists that don’t have a philosophy that prioritizes innovation will never make it in a MACH future. You must make sure you’re on the same page as your implementation partner about the fact that composable architecture is the future of enterprise technology. And you must be sure they’re willing to go on a journey of continuous innovation with you, not just sell you a one-time solution. Get the Guide from Valtech and Contentstack Learn more about MACH technology implementation and how to make it work for your business in our new in-depth ebook from Valtech and Contentstack: Break the Replatform Cycle With MACH Architecture

Oct 22, 2020

Why 51% of Digital Experience Capabilities Sit on the Shelf

Enterprise companies only use half of the capabilities of the digital experience technologies they invest in, according to our recent survey of 100 business, technology, and marketing leaders in UK enterprises. Those features aren’t just taking up space; they carry significant costs. Respondents estimate that, on average, maintaining and licensing these unused features accounts for 24% of their current digital experience investment. While our study focused on businesses in the United Kingdom, unused software is hardly a regional issue. A 2016 study estimates that there is $259 worth of unnecessary, unwanted software on each computer in every office worldwide. Features aren’t going unused because of a lack of need. Over half of our survey respondents (52%) said that they expected substantial investment in new technologies to meet their digital transformation ambitions in the next 18 months. Worldwide spending on digital transformation is expected to reach $2.3 trillion in 2023; if companies are allocating a quarter of that to unused features, that means billions of dollars spent on unused capabilities. While we were surprised to learn that such a large percentage of digital experience investment was wasted, we did expect that very few companies were exploiting their full technology stack. We were more curious about why these tools aren’t implemented and how companies can evaluate new tools to prevent unused investment. Today’s Roadblock: Expensive Implementation The high cost of implementation was the most significant pain point, with 43% of respondents listing it as a barrier responsible for unused digital experience capabilities. For many companies, a rapid transformation has led to a web of custom integrations made on an as-needed basis. Implementing new technology doesn’t just mean wiring it into the system, but also untangling the spaghetti of integrations already there. 
It’s a tedious process at best, and in some cases it turns into a scavenger hunt for information on connectors built by someone no longer at the company. Tellingly, 58% of the respondents (and 74% of technology decision-makers) said that a lack of integration ability was a top reason their current investment would become inadequate over the next 18 months. The integration headache doesn’t just slow down transformation, it can stop it. In an interview about the company’s decision to modernize its architecture, The Spectator’s Director of Digital said, “In terms of ability to innovate, everything had a cost associated with it, which put us off doing anything risky. That meant our technology was stagnating — and so was our ability to serve customers.” If stitching a new technology into the stack is the largest barrier today, when application integration rates are just 31% in the US and 26% in the UK, companies aiming for a more connected experience need an integration makeover — or risk getting locked into a legacy knot of dependencies. Tomorrow’s Road: Composable Architecture While it would be great to stop time and tidy up your toolkit all at once, businesses don’t have that luxury. “Companies are modernizing their approach to digital in stages,” explains Neha Sampat, CEO of Contentstack. “They need to be able to access new technologies and tools now while transitioning their stack over time. Modern software needs to integrate not only with other new technologies but also with legacy tools to make the wider digital transformation as smooth as possible.” An increasing number of software providers are coming around to this way of thinking. The digital experience is now too extensive for any one platform to handle, and modern vendors understand that the most competitive tools are the ones that excel in their specific area — and play nicely with everyone else. 
Recently launched, the MACH Alliance is a growing group of enterprise vendors and system integrators that believe this type of composable architecture will power the next generation of business and technology. With Microservices-based, API-first, Cloud-native, and Headless solutions, the Alliance is helping enterprises embrace the paradigm shift from legacy platforms to an open technology ecosystem that’s designed to evolve. Today’s Roadblock: Feature Bloat Feature overlap between tools was cited as a reason for unused investment by 38% of respondents. These companies may have started on a one-size-fits-all software suite for their digital experience, but as their ambitions became more unique, they turned to more modern tools. For instance, a company’s main site might run on the legacy platform while a headless commerce system handles transactions, another vendor optimizes on-site search, and a custom-built solution powers the mobile experience. Because the original vendor suite was designed to be a “one-stop shop” for digital, adding a new tool often requires custom workarounds to integrate — making the original platform very sticky. A critical part of the business may depend on only 10% of the legacy platform, but the licensing fee still requires payment for 100% of the suite. Business, technology, and marketing decision-makers felt similarly about feature overlap, with 35%, 41%, and 36%, respectively, listing it as a barrier. However, they differed when it came to feature necessity. Technology leaders ranked “they are features and capabilities we do not need” as the most common barrier (44%) while business and marketing both ranked it as the lowest (19% and 15%, respectively), hinting at a disconnect between the groups in regard to digital ambition and technology capabilities. 
Overall, whether due to duplicate features or unnecessary capabilities, it’s clear that many enterprise businesses are finding their modern ambitions stuck on legacy feature management practices. Tomorrow’s Road: Modular Tools As mentioned above, enterprise technology is moving away from the classic, single-vendor suite to a modular solution ecosystem. In a recent interview, MACH Alliance board member Matthew Baier spoke about the freedom offered by composable tools: “MACH is as revolutionary as the ‘undo’ button. You can make decisions that don’t punish you for years to come. You can pick a piece of technology that you’re unsure about, and instead of committing to it for the next ten years, you can test it out. If it works as expected, it’s already there and integrated. And if it doesn’t, you can remove it from the stack without everything falling apart.” Technology is one half of the equation for useful features; the other is market awareness. When discussing the best ways to evaluate new technologies, the Director of Digital of an iconic British luxury fashion house advised: “The best question you can ever ask in an evaluation is if they can give you a specific example of where they deployed something recently that came from a customer idea. For the platforms we chose, within seconds, a person in the room could tell us one or two very good recent features they built because a customer suggested it. That is really authentic insight showing that they are listening to and partnering with their customers.” Selecting modular tools with customer-driven feature development helps businesses quickly access the capabilities they need and quickly remove the ones they don’t. Today’s Roadblock: Maintenance Burden The high cost of consistently maintaining capabilities was listed by 38% of respondents as a barrier to use, and 36% said that the time spent maintaining their current digital experience investment is disproportionate to its business value. 
According to Devada’s “2019 State of the Developer Report,” 66% of developers find that maintenance of legacy systems and technical debt hinders productivity. So it’s no surprise that 64% of businesses list the need to upgrade outdated infrastructure as the top reason for increasing their IT budgets in 2020. Of course, maintenance is not only a burden on the backend: business and marketing decision-makers were more likely than their technology counterparts to list maintenance as a barrier (45%, 36%, and 32%, respectively). The sales promise of a “data-driven” experience can, in certain platform promotions, conveniently leave out that delivering on that promise requires the user to manually enter and update massive amounts of data. On average, an employee uses eight SaaS applications to do their job, and companies with more than 1,000 employees have over 200 SaaS applications in their stack. If these platforms don’t automatically sync, the time required to copy data between them means it’s simply not feasible to keep them all up to date. This means that many data-centric features, such as personalization and analytics capabilities, remain unused.

Tomorrow’s Road: (Really) Try Before You Buy

If a software vendor claims to have a flexible solution that helps businesses move quickly, it should be no problem for enterprises to take a test run of the software beyond the standard sales demo. “I always recommend to any other company going on this journey to hack your way past the sales deck,” says the Director of Digital at a British luxury fashion house. “There were three key selection decisions that were thrown completely on their head by doing a small, one-day hackathon. We were able to evaluate bottom-up, with facts and evidence, on business cases signed off on by the CIOs. Effectively, decisions were changed because we spent that little bit of time proving things worked.” These real-world trial runs shouldn’t be limited to developers.
Having cross-departmental teams evaluate solutions means that red flags are identified early in the selection process. Unifying on core selection criteria from the start can help teams agree on small compromises along the way and prevent anyone from getting locked into effort-heavy tools they never had the chance to veto.

Pave the Way Now for Continued Digital Acceleration

Companies have felt the pressure to transform for quite some time; in 2020, digital experiences are being forced to change even faster. It’s estimated that the complex business challenges companies are facing due to Covid-19 have accelerated companies’ digital communication strategies by a global average of six years. Only 27% of our survey respondents felt that their current technology could adequately support their digital transformation ambitions over the next 18 months, and one in ten believe that their existing tools are an obstacle to their goals. With ambitious transformation goals and urgent digital needs, many companies are renovating and expanding their architecture. These improvements can incorporate modern, modular tools to unlock companies from the current barriers of legacy technology and create a solution ecosystem that can evolve with the enterprise’s digital ambitions. See the full results of the UK enterprise leader survey: The State of Digital Experience Investment in the United Kingdom

Oct 15, 2020

Building a Working Retail Augmented Reality Prototype: Final Week of Development

When it came to building a mobile-web-browser Augmented Reality proof of concept in less than four weeks, we knew two things: we would run into a whole bunch of unexpected challenges, and it would get done. Software development in a time crunch is almost guaranteed to come with eleventh-hour surprises, late nights, and frazzled nerves. And that’s pretty much exactly what happened. Luckily, the Valtech and Contentstack teams building our Augmented Reality demo do not rattle easily. The team came up with a concept for a content-rich AR showcase in less than one week. The week after that, they became subject-matter experts on beauty and skincare and determined the must-haves for a usable, working AR POC. And last week, they designed and developed the live interactions in AR, including how the data would be structured and pulled from the CMS. Here’s how it all came together in week three — as we raced to build a working Augmented Reality prototype that would help a retail beauty (skincare) buyer navigate supermarket shelves and browse a brand’s products to receive a personalized recommendation; take a product home and get onboarded to using it; and finally get recommendations for repurchasing, changing usage, and/or leaving a review. And spoiler alert: yes, we did all of that, and yes, you can try out the AR app for yourself.

Integration, integration, integration

“If this were Sesame Street, the word of the week would be: Integration!” quipped Danielle, our project manager, at the start of week three. Week two had consisted of building all the individual ‘parts’ of the application in small ‘samples’ — little scenarios that could, in parallel, all be shown to work.
This included things like: designing the scenarios in 2D; displaying the scenarios in AR to look like they did in 2D; programming the app to recognize the moving bottle as a controller for changing the experience in AR; pulling data from Contentstack; and so on. “Integration!” meant actually combining all of it together — is it any surprise that we were expecting to run into some weirdness?

Design: Fusing brand with functionality

The plan all along was to show the app working with three “generic serums”: to simulate one beauty brand’s different serum product offerings, and thus illustrate how a customer could browse between them using the AR experience in-store. The thing is, Svante, our designer, had been working on beautiful serum labels, while Alex, our developer, had been figuring out how to make the information we needed display (and persist) in AR using clear markers. (More on that in our week two post.) Since we chose to work with fiducial markers, which are essentially big black boxes with asymmetrical shapes or content inside, Svante’s task at the beginning of the week became to fuse — or integrate — these marker “boxes” with custom, beautiful, skincare-like labels. Needless to say, he was up to the challenge, and he created three beautiful label designs that worked seamlessly in the AR app. So people could access the AR experience, and it looked like a real beauty product. Check! Print these labels at home to try out our AR app. The rest of the week was a game of expectation vs. reality for design. Prior to this week, Svante had only been designing in 2D, so once the designs went “live” into the 3D AR experience, he needed to make quick adjustments on the fly so things could look and work the way they were supposed to. For example, we had originally planned a “recommended” ribbon in Scenario 1, which would appear wrapped around the bottle that the app recommends for the user accessing the experience.
It turned out that it’s pretty tricky to wrap a 2D object around a 3D one, so our wrapped ribbon tails turned into more of a crown.

Development: Will it run?

A-frame skeletons

The week started with Alex building out the A-frame skeletons for the AR experience. Danielle explains: “For rendering content in 3D space, we used the A-frame and AR.js libraries (see the research on the different frameworks we considered here). AR.js is the Augmented Reality component — it makes use of the camera to do computer vision, recognizes markers, and places content on top of the real world. A-frame allows us to describe a 3D scene with HTML-like components. Essentially, you can tell the experience, ‘there’s going to be text here, and a graphical element there, and something else here’. A-frame can also be used to define gestures, like the rotation and tilt of the bottle. It’s a higher level of abstraction than going straight into WebGL and trying to define all these components and sections yourself. So before we could actually start working with the content from Contentstack and the visuals and assets from Svante, we had to lay out the skeleton, or template, for where all those pieces would go.”

Integrating HTML elements into the AR experience

On the HTML side of things, we added some cool graphical HTML elements to the experience, especially noticeable in Scenario 1, where a pop-up in the lower third of the screen gives fuller context to the shopping experience: it can tell you which part of the “skincare routine” you are shopping for and lets you save selections. Here is where we started to hit some road bumps. Blending AR.js together with the HTML elements turned out to be trickier than we expected. We discovered that AR.js, as written, adds A-frame elements to the document body and then sets the size of the body to the dimensions of the webcam, which makes it tricky to integrate properly positioned HTML elements atop an AR scenario.
It wasn’t planned, but the team ended up forking the AR.js code and making a local branch that fixed this issue, so that we could render 2D HTML elements and our 3D AR content as expected. Note: We love and are fully committed to the open-source community and its practices, and as a next step, we plan to contribute these changes as a pull request back to the AR.js library. We also ran into some issues with tapping gestures, because raycasting (which is how you can click on objects in a 3D scene) was not working properly. This was due to some customizations in the AR.js “scene camera” setup (the view into the 3D world). Once again, we knew how we could fix it, but we didn’t have time in this final week of development. Luckily, the team came up with the idea of using a “swipe” gesture instead, and this worked reliably and felt natural in use. Note 2: These kinds of issues could also be resolved by working with “native” AR tools, i.e. ones that leverage the Android and iOS AR frameworks. We chose a mobile-web-browser-based experience to make user access as seamless as possible. We knew this would come with tradeoffs, and these are just a couple of examples. Building mobile-web AR experiences is still a bit of a “wild west”, and we are all still learning how to make them better.

Text height

An unfortunate reality of working with 3D graphics is that, unlike 2D browser renderers, A-frame (and many other 3D libraries) doesn’t automatically figure out the dimensions of elements and make them flow and stack within a document object model. That was, of course, an issue when getting text from Contentstack that might be dynamic, changing, or simply not come with a known “text height”. Time constraints drove us to hardcode the text layouts in an effort to complete the work by the end of week three. However, this issue has since been addressed and corrected through a few sneaky post-week-three hours.
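Since tap raycasting was swapped out for swipes, it is worth sketching what a minimal swipe classifier can look like in plain JavaScript. This is illustrative only, not the project’s actual gesture code, and the distance threshold is an invented assumption:

```javascript
// Sketch: classify a touch gesture from its start/end coordinates.
// Screen coordinates: y grows downward, so a positive dy is a "down" swipe.
// The 50px minimum distance is an illustrative threshold, not a standard.
function classifySwipe(startX, startY, endX, endY, minDistance = 50) {
  const dx = endX - startX;
  const dy = endY - startY;
  // Too short a movement counts as a tap, not a swipe
  if (Math.max(Math.abs(dx), Math.abs(dy)) < minDistance) return null;
  if (Math.abs(dx) >= Math.abs(dy)) {
    return dx > 0 ? "right" : "left";
  }
  return dy > 0 ? "down" : "up";
}
```

In practice, the start coordinates would come from a touchstart event and the end coordinates from the matching touchend.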
Parcel.js issues

We also had some unexpected bugs with our build tool, Parcel.js, such as it being unable to use our custom fonts (fixed by hosting the font files on Contentstack assets) and referencing HTML files incorrectly (addressed by debugging the command-line parameters for Parcel to make sure the paths in the built files were generated correctly). We figured it out, but it was another eleventh-hour surprise... exactly the kind that is to be expected in fast-paced software development!

Reusing markers

When it came to actually using the application, we wanted to put everything into a single HTML file, accessible from one button, so that we would only have to ask for permission to use the camera and the sensors once, and could reuse a lot of the same elements. The problem was that reusing a marker and associating it with different 3D content for different scenarios was really tricky and created conflicts. Making this work would have required more code than we had time for, so for now, each of the three scenarios is a separate document and requires its own permissions — though they can all be accessed through in-experience buttons once you’ve launched a scenario.

And finally... We did it!

The team pulled together, and with the help of Ben and Gal at Contentstack, Alex, Jason, and Svante pulling a few final late nights, and everyone else cheering them on with fingers crossed... we built the thing. It’s working and it’s live. Check out the demo video below. And of course, try it for yourself: go to spyglass.valtech.engineering, print the labels, and see the magic happen! Stay tuned for our lessons-learned summary with lots more details on how we built this Augmented Reality demo, what enterprises need to know about building out AR experiences, and why we believe there’s endless opportunity to explore emerging technologies with MACH (microservices, API-first, cloud-native, and headless) architecture.

Sep 17, 2020

Development of an Augmented Reality Retail Skin Care POC: Content Modeling and Interaction Building (Week 2/3)

Week Two: Content modeling for AR; final designs; selecting and programming marker tracking movement patterns and text display parameters. Welcome to the reality of building an augmented reality demo. This is the second-to-last week of our "live" project documentation (find week zero here, and week one here), and this week we moved away from designs and theory and into hands-on development on all fronts of this project.

In this week’s post, you can read about:
Content modeling for AR experiences
Interaction design on top of the real world
Developing the AR content display and marker tracking interaction

As a summary, we have decided to build the following: a mobile web-browser Augmented Reality (AR) experience to be used with a brand’s skin care products -- for the purposes of this POC, we are focusing on the skin care category of serums. It will help the customer select the best serum for them in the store; receive onboarding instructions and personalized recommendations when first using it; and, after using it for a while, receive updated recommendations and information. First up this week: how to actually get all this information into our AR experience.

Headless CMS content modeling for Augmented Reality

In order to provide a content-rich AR experience to our users, a lot of data (brand and product names; product textures; ingredients’ purposes, sources, and contraindications; usage instructions) must be stored in our CMS (Contentstack) in a way that is easy to query (so it shows up the way we want, at the speed we need, and prepared for personalization) and easy to edit or modify (because products get added, names change, instructions get updated, and new ingredient configurations and contraindications happen).
The process of documenting all the types of content you’ll need for an experience (whether AR, VR, mobile app, or website) and putting it into logical buckets to ensure your CMS is effectively configured for editing and delivering that content to that experience (or many experiences) is called content modeling. (Here’s a primer we’ve written on this topic.) With traditional content management systems, which have been designed for building web pages, this is a pretty straightforward process. You basically have a few ways you can organize things: the folder structure can reflect your site pages, or it can reflect content types (elements of a webpage like banners, images, forms, and text; repeating formats like blog articles, press releases, customer testimonials, and so on). Then it’s just a matter of giving editors page templates that allow them to mix and match these content types within certain identifiable limits. Or, in some cases, the CMS even comes with static templates that can’t be customized or made more flexible at all. This is based on the assumption that because there are only a few, relatively predictable ways this content is going to be used for all customers of that CMS, it’s easier for everyone to pre-define the content models. When it comes to headless systems, though, things are a little more fluid, especially for a CMS like Contentstack that was designed to be as un-opinionated as possible about where that content is going to end up. While you can have (and we do provide) lots of solid guidance on specific examples for different industries and use cases, at the end of the day, your content model is going to be hyper-unique to your organization’s ways of working and of delivering your content. As it turns out, this is actually a good thing when it comes to building out Augmented Reality content models.
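To make the idea of content modeling concrete, here is one way the content types for this experience might be sketched out. This is a purely illustrative pseudo-schema: the field names and structure are invented for this post and do not represent Contentstack’s actual schema format.

```json
{
  "content_types": [
    {
      "title": "Serum",
      "fields": [
        { "name": "product_name", "type": "text" },
        { "name": "texture", "type": "text" },
        { "name": "usage_instructions", "type": "group" },
        { "name": "category", "type": "reference", "refers_to": "Category" },
        { "name": "ingredients", "type": "reference", "refers_to": "Ingredient" }
      ]
    },
    {
      "title": "Ingredient",
      "fields": [
        { "name": "name", "type": "text" },
        { "name": "purpose", "type": "text" },
        { "name": "contraindications", "type": "text" }
      ]
    },
    {
      "title": "Category",
      "fields": [
        { "name": "name", "type": "text" },
        { "name": "description", "type": "text" }
      ]
    }
  ]
}
```

The point of the sketch is the shape, not the specifics: separate content types connected by references, so each piece of information lives in exactly one editable place.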
Benefits of a headless system for Augmented Reality

Ben Ellsworth, Solutions Architect at Contentstack, says that a headless CMS is somewhat of a no-brainer for developing AR experiences precisely because of its flexibility, or lack of opinion about where your content is going to go. He explains: "There isn’t a long-standing tradition of AR and VR applications, and there’s no solution that is pre-built for the problems that an enterprise is going to experience when they’re developing for AR. When you’re trying to do something uncharted, you cannot let yourself be limited by something that was built with “websites” in mind. Contentstack is extremely agnostic to the display and dynamic in the way it relates content to the display layer, so that you can architect the data and the content structure in the best way for where it’s going, no matter what the end goal is.” “You’re only constrained by the limits of today’s technology,” adds Gal Oppenheimer, Manager, Solutions Architects at Contentstack. “So, in the case of AR: what can the phone browser do, and what can the cameras do? Those are actually our constraints, because that’s where we’re pushing the boundaries in terms of what technology allows us to do today.”

Content modeling: Identifying, classifying, and uploading content

What did content modeling for our AR experience actually look like?

Step 1: What content is there?

First, we had to figure out all the different kinds of content that we might want to use. To do that, we had to research some serums so we could know what kind of information exists about them. We found this site particularly useful for discovering the purposes of product ingredients.

Step 2: Extrapolating: what are the content types that we might need?

In this step, we listed every kind of content that we could identify about skin care products that might be relevant to our purposes.
We laid this out in a document with hypotheses for how we could structure each one in the CMS (text, group, reference, etc.). The Contentstack team consulted with the Valtech team on how best to structure this content in the most useful way.

Sidebar: Flexibility vs. ease of use

The biggest question that comes up when designing content models in a headless CMS is whether, for a given scenario, more flexibility would be better, or whether some rigidity would actually better serve the end users (editors). Ben explains: "There is a point of diminishing returns where additional flexibility ends up being detrimental to productivity. When a content creator has access to 1,000 options for structuring a piece of content, they have to make 1,000 decisions every time they create a piece. This is an extreme example, but with a headless content management system, the person modeling the content does have the power to create an infinitely flexible system.

“As you model your content, ask yourself why you’re giving the editor the options you are.

“For example: in our application, we were deciding between using a group field or a modular block for the product usage instructions. The modular block would allow editors to move the instructions to any place in the AR content display. However, because we would only ever need one set of instructions, and the single set would need to be mandatory, we went with the group field. It has most of the benefits of a modular block without unnecessary features like multiple instances.

“On the flip side, we had originally considered using a simple drop-down to choose product categories. In a non-headless system, this would be par for the course, since the editor needs to be able to pick between many options for each product. With a headless system, we can do better and use reference fields.
This lets us create a whole new content type for the categories, where we can store their names as well as additional information like descriptions, links, and images. We then let the editor reference that content type from the product content type. If we need a new category added to the list, we don’t have to change the content model directly, which would require a higher level of access in the system and could break other processes. We simply create a new entry of the category content type, and it will automatically be available to all product entries.”

Step 3: Input the content for the AR experience into the CMS

With decisions on the content types made, it was time to build out and populate our content model. To do that, we had to create some serums! We did this by taking inspiration from the real serums that we researched in step 1 and coming up with some ingredient combinations and usage scenarios of our own. We then entered the content data into the CMS. This part was pretty straightforward, since we were following the model that we had already laid out. The bonus is that now, when a brand wants to build out an AR experience like this for their products, the content modeling has already been done, so we’ve got a template to work with in the future (customized, of course, to their particular use case). Below, you can see some examples from the live stack!

Step 4: Querying the database

The last step was figuring out how to get data out of Contentstack and into the AR experience. Contentstack has two ways to retrieve data via our Content Delivery Network (CDN), and the team wanted to test both of them. So Valtech wrote a quick sample that pulled down the data we entered (as JSON) from each in turn. They decided to use the new GraphQL API because of the simplicity of the queries and because it returned fewer data properties.
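As an illustration of the kind of client-side post-processing a team might apply to such a response, here is a sketch of a helper that unwraps connection-style nesting from reference fields. The edges/node shape and the field names in the example are assumptions for illustration, not Contentstack’s documented response format:

```javascript
// Sketch: recursively unwrap connection-style nesting
// ({ edges: [{ node: … }] }) into plain arrays of objects,
// so the AR code can consume a simpler structure.
function flattenResponse(value) {
  if (Array.isArray(value)) return value.map(flattenResponse);
  if (value && typeof value === "object") {
    if (Array.isArray(value.edges)) {
      // Collapse the wrapper into a plain array of flattened nodes
      return value.edges.map((edge) => flattenResponse(edge.node));
    }
    const out = {};
    for (const [key, inner] of Object.entries(value)) {
      out[key] = flattenResponse(inner);
    }
    return out;
  }
  return value;
}
```

With a helper like this, `response.ingredients.edges[0].node.name` becomes simply `response.ingredients[0].name` in the display code.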
They then added a function to process the response JSON and simplify the object structure — removing extra nesting on reference-field JSON and rearranging how the data was organized in the response from the API — so that it could be consumed more easily and efficiently by the AR code they were already writing.

Designing what the live experience will look like

Following last week’s progress on creating sketches and comps for how to display the AR information around the product bottle, this week Svante (our designer) worked on figuring out what the whole AR experience will look like. That meant going beyond the “augmented” part of information display and marrying it with the “reality” side of things. For Scenario 1, shopping in the store, we created a way to home in on a particular product while in a brightly lit, colorful shop. As you can see in the graphics, the idea was to darken and blur the background (more on how we developed this below) and zero in on exactly the product that the customer wants to see more information about. For Scenarios 2 and 3, a similar “darkening” effect was applied so it would be easier to see the displayed information no matter what kind of colorful or distracting bathroom the user might be accessing the experience in! Then it was over to the developers to figure out how to actually make all of this happen.

Developing the live interaction

This week, development focused on three major elements of the AR experience that we needed to nail down for this POC: finalizing what the fiducial markers will look like; figuring out exactly how we’re going to track those markers to create the best user experience; and figuring out how the AR elements will be displayed, including the background dimming effect.

1. Fiducial markers: smaller and customized

Last week we figured out that fiducial markers (those black square things) would work best for this POC, as they were the easiest for our AR framework to latch onto.
But we also want our product to be as pretty as a skin care label usually is, so we tried to see if we could shrink those markers down for more design flexibility. The standard size is 1 inch, and we were able to get them down to 0.5 inch and still have them track the bottle movement, on all three axes, really well. We also tested creating custom markers, which is of course going to be crucial for designing stylish skin care bottles. These also worked; in fact, in some cases they worked better than the standard markers. Custom “umbrella” fiducial marker.

2. What’s the most user-friendly way to display AR content in response to markers in motion?

We tested different ways of spinning and tilting the bottle to display what was being shown on-screen. Alex Olivier explains that her main concern, other than supporting natural hand movement, was to lower the risk of the marker getting lost. “In many AR experiences, the content disappears entirely if the marker is lost for a second, which I think is a mistake,” she says. For this reason, the most compelling motion they found for the bottle-as-controller was a rotation around its own axis. A big decision point at this stage was how to display the content that would be controlled by rotating the bottle to detect multiple markers. The team created a system of keyframe rotations around a 3D layout, then animated and interpolated between them as different markers were detected. “We had to dust off our trig books!” says Alex. Using this rotation motion (instead of a back-and-forth tilt, for instance) lowers the risk of losing the marker, allows the content to persist in a natural way, and makes it more likely that the final user experience will be seamless.

3. Maximizing AR element visibility for a content-rich AR experience

Here’s something we learned about content-rich AR experiences, from Alex: “Displaying text (and doing it beautifully) is difficult in computer graphics.
You need text to look good at multiple scales, at multiple distances, and from multiple angles! That’s why we ended up generating a signed distance field font, a special kind of bitmap font that uses signed distance fields to rasterize text beautifully. (You can read more about it here.)

“The other thing about text in 3D graphics is that unless you’ve written yourself a handy library, you end up doing all of the content layout manually. There are a few basic features that were available to us (e.g. alignment of text), but a lot of the work involved flat-out building the layouts that Svante had designed, calculating where to put text, and writing functions that could generalize this so it wasn’t 100% hard-coded. If you’re used to slinging CSS or using nice built-in iOS features, you may not appreciate the effort that goes into text in graphics… and now you know why you rarely see text-rich AR apps!”

The last element we built out this week was making Svante’s cool darkened-background design come to life. Alex explains: “To do a blur, the most efficient way is usually to use a ‘shader’, which is a program you run on a graphics card. You take a texture or an image and pass it through that shader, where all the pixels get transformed.

“There were some tricks to plugging everything involved into AR.js via A-frame: for example, making sure the blurred area is always the same size as the webcam screen, which involved transforming those vertices to a certain size. It wasn’t necessarily difficult, but it was a lot of things to learn in a short amount of time.” Despite these challenges, we were able to get this working by the end of week two, which was a win!

P.S. Tip for all AR developers: ngrok.io turned out to be invaluable for helping us test things on our phones. Before we discovered it, running code on the phone required a pretty complex choreography of copying over security certificates.
ngrok lets you run an HTTPS server on your local computer that can be easily accessed by anyone on the internet, with the proper security settings for AR to work, which made testing much faster.

Check out Week 3: It all comes together! The pieces we’ve been tracking thus far (content, design, and development) must all integrate with each other into one working demo.
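One more technical footnote on the keyframe-rotation work described above (the part that had the team dusting off their trig books): interpolating between the rotations associated with different detected markers needs shortest-arc angle blending, which can be sketched like this. It is illustrative only; the project’s actual math may differ:

```javascript
// Sketch: interpolate from angle a to angle b (in degrees) along the
// shortest arc, as one might when animating content between the
// keyframe rotations associated with different detected markers.
function lerpAngle(a, b, t) {
  // Wrap the difference into [-180, 180] so we take the short way round
  let diff = (b - a) % 360;
  if (diff > 180) diff -= 360;
  if (diff < -180) diff += 360;
  return a + diff * t;
}
```

Naive linear interpolation from 350° to 10° would swing the content almost a full turn the wrong way; the wrapping step makes it cross 0° instead.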

Sep 10, 2020

Augmented Reality Frameworks for an Enterprise Web-Based AR Application

How do you create augmented reality? In the process of building an Augmented Reality proof of concept in under four weeks (see details here), the team at Valtech evaluated a series of AR frameworks and software development kits (SDKs) that would enable them to rapidly pull in data from a headless CMS (Contentstack) and display it in an Augmented Reality interface in a phone or tablet web browser. Here is their quick research report.

For total beginners to AR (like me): an AR framework is the SDK that merges the digital world on-screen with the physical world in real life. AR frameworks generally work with a graphics library, bundling a few different technologies under the hood — a vision library that tracks markers, images, or objects in the camera; a lot of math to register points seen by the camera to 3D space — and then hooks to a graphics library to render things on top of the camera view.

Which software is best for our web-based Augmented Reality use case?

The key considerations for the research were:

Speed. The goal was to create a working prototype as fast as possible. Once we were successfully displaying content and had completed an MVP, we could continue testing more advanced methods of object detection and tracking: training custom models; identifying and distinguishing objects without explicit markers; potentially using OCR as a way to identify product names; more of a wow-factor. The team was agnostic on whether to work with marker or image tracking -- willing to use whichever was most feasible for our use case.

Object tracking. Since the team was not trying to place objects on a real-world plane (like a floor), they realized they may not need all the features of a native iOS or Android AR library (aside from marker tracking).

Content display.
That said, the framework needed to allow for content to be displayed in a cool and engaging way, even if we didn’t achieve fancy detection methods in three weeks: something more dynamic than just billboarded text on video, and maybe some subtle animation touches to emphasize the 3D experience (e.g. very light Perlin movement in the z plane).

Platform. The preference was for a web-based build (not requiring an app installation).

Comparing the available AR frameworks: marker tracking, object tracking, and platform readiness

Here's an overview of our AR / ML library research notes:

AR.js
Uses Vuforia.* Cross-browser and lightweight; probably the least-effort way to get started. Offers both marker and image tracking; image tracking uses NFT markers. Platforms: Web (works with Three.js or A-Frame.js).

Zappar WebAR
Has an SDK for Three.js. The SDK seems free; content creation tools are paid. Image tracking only. Platforms: Web (Three.js / A-Frame / vanilla JS); Unity; C++.

ARKit
Not web-based. Image tracking is straightforward, but can’t distinguish between two similar labels with different text. Offers both marker and image tracking. Platforms: iOS.

Argon.js
Uses Vuforia. Has a complex absolute coordinate system that must be translated into graphics coordinates. No GitHub updates since 2017. Offers both marker and image tracking. Platforms: works in the Argon4 browser.

WebXR
Primarily for interacting with specialized AR/VR hardware (headsets, etc.).

XR.plus
Primarily an AR content publishing tool for creating 3D scenes.

Google MediaPipe (KNIFT)
Uses template images to match objects in different orientations (allows for perspective distortion). You can learn more here. Marker and image tracking: yes, sort of... even better. KNIFT is an advanced machine learning model that does NFT (Natural Feature Tracking), or image tracking -- the same as AR.js does, but much better and faster. It doesn't have explicit fiducial marker tracking, but markers are high-contrast simplified images, so it would handle them well, too.
Platforms: Just Android so far, doesn't seem to have been ported to iOS or Web yet Google Vision API - product search Create a set of product images, match a reference image to find the closest match in the set. Cloud-based. May or may not work sufficiently in real-time? Image classification Platforms: Mobile / web Google AutoML (Also option for video-based object tracking) Train your own models to classify images according to custom labels Image classification Platforms: Any Ml5.js Friendly ML library for the web. Experimented with some samples that used pre-trained models for object detection. Was able to identify “bottles” and track their position. Object detection Platforms: Web p5xr AR add-on for p5. Uses WebXR. Platforms: Seems geared towards VR / Cardboard * Vuforia is an API that is popular among a lot of AR apps for image / object tracking. Their tracking technology is widely used in apps and games, but is rivaled by modern computer vision APIs - from Google, for example Graphics Library Research Under the hood, browsers usually use WebGL to render 3D to a <canvas> element, but there are several popular graphics libraries that make writing WebGL code easier. Here's what we found in our graphics library research: Three.js WebGL framework in Javascript. Full control over creating graphics objects, etc., but requires more manual work. Examples: Github Repo A-Frame.js HTML wrapper for Three.js that integrates an entity-component system for composability, as well as a visual 3D inspector. Built on HTML / the DOM Easy to create custom components with actions that happen in a lifecycle (on component attach, on every frame, etc.) Examples: Github Repo PlayCanvas WebGL framework with Unity-like editor Could be convenient for quickly throwing together complex scenes. You can link out a scene to be displayed on top of a marker, or manually program a scene. 
Potentially less obvious to visualize / edit / collaborate / see what’s going on in code if you use an editor and publish a scene. Slightly unclear how easy it is to dynamically generate scenes based on incoming data / how to instantiate a scene with parameters Examples: Github Repo Recommendations for this project Here is what we decided to go with for our AR demo. Start with AR.js (another option was Zappar) + A-Frame.js for a basic working prototype In the longer term, explore options for advanced object recognition and tracking Read more about determining the best way to do marker tracking; narrowing down the use case and developing the interaction design; and content modeling for AR in our full coverage of week one of development.
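The "least-effort way to get started" is visible in how little markup a first AR.js + A-Frame scene needs. Below is a minimal sketch, assuming the standard "hiro" sample marker and illustrative CDN script URLs; it is not the actual project code, just the shape of a hello-world marker scene.

```html
<!-- Minimal AR.js + A-Frame marker scene (script versions are illustrative) -->
<!DOCTYPE html>
<html>
  <head>
    <script src="https://aframe.io/releases/1.0.4/aframe.min.js"></script>
    <script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar.js"></script>
  </head>
  <body style="margin: 0; overflow: hidden;">
    <a-scene embedded arjs>
      <!-- Content rendered only while the "hiro" sample marker is in view -->
      <a-marker preset="hiro">
        <a-box position="0 0.5 0" material="color: #4CC3D9;"></a-box>
      </a-marker>
      <!-- The camera entity that AR.js drives from the device camera -->
      <a-entity camera></a-entity>
    </a-scene>
  </body>
</html>
```

Opening a page like this on a phone browser and pointing the camera at a printed marker is roughly all it takes to see a 3D box anchored to the marker, which is why AR.js made a good starting point for a one-week prototype.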

Sep 08, 2020

Augmented Reality for Retail: From Concept to Game Plan (Week 1/3 of Development)

The team at Valtech is building a Contentstack-powered Augmented Reality proof of concept in 4 weeks. Week One: AR Framework & User Research, Marker Tracking, Content Modeling, and Interaction Design

If you're just joining us, you can find a summary of week zero (how we got this far) here. Today we're covering week one, the goal of which was to define everything needed to accomplish the POC. The concept so far: we are building an application that will take some complex information from a beauty/skincare product and make it easier to understand through augmented reality and personalization.

Experience and Interaction Design

Before any development work could begin, our concept had to be translated into isolated problem statements that could then be turned into tasks to fill out our three one-week sprints. This meant it was time for another brainstorming session.

What experience are we creating? The team spent 3 hours on Zoom and in their Miro board with the goal of hammering out the following:

- What problem are we solving for customers?
- What specifically are we going to demonstrate in our POC?
- What data are we going to display?
- What is the interaction model?

1. What problem are we solving for customers?

What task do we want our users to be able to accomplish? What are the user needs? For many at Valtech, this step was a rapid onboarding into the world of skincare. First, the team took a look at some major skincare retailers to get an idea of the basic taxonomy of skincare products: what do they call things, and how do they classify them? They also did some user research: a quick internal Google Forms survey that aimed to identify the biggest skincare questions, concerns, and needs among real people who might use this kind of app.
Based on these two strands of research, the team found the following: there is very little variation in the way products are categorized (cleansers, exfoliators, moisturizers, etc. came up over and over again as product category descriptors), and people are generally overwhelmed by the amount of undifferentiated information thrown at them by skincare products and brands. In other words, though you might know you need a cleanser, moisturizer, and sunscreen, that still doesn't tell you which one works best for you; whether the ingredients will help or harm you personally, or interact poorly with each other; or even how much of each to use, when, and in what order. So there was definitely an unmet information-simplification need here. Check.

2. What specifically are we going to demonstrate in our POC?

What products are we going to work with for scanning and information display? Here, the Valtech team pulled in some beauty and skincare subject matter experts they found within the company. Together, they identified the different steps that go into a skincare routine:

- Cleanser: to clean the skin
- Toner: an astringent to shrink the pores and change pH
- Serum: which nobody could explain, beyond "something magical with vitamins"
- Moisturizer: to prevent the skin from drying out
- Sunblock: to protect from the damaging effects of the sun

Big insight #1: people are especially confused about a particular category of skincare products.

Based on this, the team decided that for the purposes of this demo they would zero in on helping people select and use a serum, since this was the product they could find the least clarity on (and therefore the one whose information needs would be immediately obvious to the biggest number of people). What on earth is a serum?

3. What data are we going to display?
At the root of this next question is one that the team assures me they keep coming back to over and over again: how are we actually going to make this useful? Explains Jason, "If people are just looking at words, then it's essentially just a website brochure. We want users to be able to interact with this in a way that can help them accomplish the tasks they need to accomplish." In the case of figuring out what to do with a serum, the team identified the following information needs that could arise for our POC:

- Concentration of serum: do I need 5% or 2% "active ingredient" (e.g. Vitamin C)?
- Usage recommendations: how do I use it, and where does it fit into my routine (in which order, how many times per week)?
- Product recommendations: what other products go along with this serum (e.g. the next step in the suggested skincare regimen)?

4. What is the interaction model?

How does the user interface with the system? Looking at the usage story so far, the team mapped out the following: someone wants to buy a serum from a particular brand. They want to know which product is recommended for them (based, for this POC, on a pre-existing "profile" with preferences, current routine, etc. already known), how to use it, and whether at some point the products they are using need to change in any way (e.g. concentration, increase sunblock or moisturizer, etc.). This is when the team hit on...

Big insight #2: this service will be the most useful if we stick to one product over time.

Up until this point, the idea had been to make an app that helps to choose between products in-store, and have it offer several kinds of interactions depending on what kind of help you were looking for. But the results of the research and brainstorming showed that with skincare, there isn't necessarily a need to constantly keep shopping for new products. Consumers want to select a product that is guaranteed to do what they want to accomplish at that point in time (e.g.
reduce wrinkles, moisturize dry skin, protect from the sun) and then understand exactly how to make that happen once they take it home. The questions don't stop once you leave the store with the product in hand. There is still a lot to understand about making this product most effective for me, in my routine, right now. So the team decided to build three interaction scenarios that would show just that: personalization of information about one skincare product over time.

What exactly will we build? Interaction Scenarios

I didn't know what interaction design was, so I asked Svante Nilson, Senior Designer. It's basically how we want users of the application to consume the AR content we are producing, as well as designing the look and feel of that content. Or in other words: What's that AR experience going to look and feel like? What's going to show up on your phone, and what's going to display around the product? How's it going to display? How are you going to interact with it? And why would people want to use this? (There's that #1 question again.) And then repeating that over the different kinds of interactions: in the store and at home.

Sketching comps

The team zeroed in on three scenarios that they wanted to build out, and Svante got to work on designing them as pencil sketches. He would then run these past the engineers to determine feasibility, and adjust as needed, until they arrived at interactions that seemed easy, useful, and possible to build quickly.

Scenario I: At the store

Differentiate between multiple bottles on a shelf. AR information here can include things like reviews, cost and affordability, ingredients from the perspective of allergic reactions or sustainability, and any other things that might make the product stand out to you and make you want to purchase it. In this scenario, you are scanning the shelf with your phone.
You are not holding any products in your hands, so you are able to tap and interact with the augmented reality information laid out around the product using your free hand. This is what you can see being worked out in the sketch below.

Scenario II: At home, first time using the product

Once home, receive AR onboarding to using this product: things like frequency per day and usage steps. Here, instead of holding your device (phone or tablet) at a distance from products that are on a shelf, you're holding the product in one hand and your device in the other hand. Your interactions with the AR display will have to be in the real world, using the product itself as a controller. Think rotating the product, or swiping up and down on the surface of the bottle, to see additional information. Below are early sketches of these interactions.

Scenario III: At home, after a while

After you've been using the product for a few months, your needs for information will change. You may want to progress to another product concentration or another product in the line; your frequency of use of this product may need to be adjusted. You may also want to leave a review. To facilitate these needs, the interaction model and visual layout can stay the same, while prioritizing other information in the AR experience itself. In the sketches below you can see a benefit of using the bottle as a controller: it naturally allows for adding "tabs" with additional personalized information and notifications (e.g. the humidity index in your area is low, use additional moisturizer with this product; or: you've been using this product for 3 months, time to think about changing the concentration). By focusing on just one product and one product line, from one brand, we are not only narrowing our scope to be able to complete the project in this tight timeline.
We are also making it more applicable to an enterprise retail use case for Augmented Reality: one of helping a skincare brand tell their story across several interactions and, eventually, products. Below, you can see the current mock-up that came from this sketch interaction design process.

Early preview of the real interaction and label

Content Modeling

Identifying and populating the data that needs to be stored and accessed. As the identified scenarios make clear, there is a lot of information that our AR demo will need to access. Some of it will be dynamic, like personalized product recommendations or changing concentrations of the active ingredient over time. Some will be static: brand names, product lines, ingredients. All of this will need to be stored in Contentstack in a manner that makes it both easy to query and easy to edit or modify. This process is called content modeling, and we will cover it in detail in Week 2.

Development

On the development side, the team also started with some research. Before anything can be built in Augmented Reality, there are a number of parameters that need to be defined. It's not too different from a website or app project: you need to define the language, database, framework (for us: AR framework and graphics libraries), and any other parameters specific to the project. For us, that meant determining how our AR application will identify the object that's in front of it, as well as how it will "know" the bottle is being used as a controller.

I. AR Frameworks and Graphics Libraries

Augmented reality development is still somewhat uncharted territory. While there is a host of SDKs available for developers wanting to build AR experiences, they aren't all necessarily enterprise-grade, cross-platform, or even production-ready.
So the first step for developer Alex Olivier was to do her homework: evaluate the available AR frameworks and graphics libraries to determine which would fit our criteria: suitable for a web AR experience (not requiring a native app installation), and as close as we could get to something that a business might actually use to build this kind of application for their own brand. For the curious: the research is documented here. The TL;DR is that we chose to go with AR.js (as the best option for building AR for mobile web browsers), Three.js (a WebGL framework in JavaScript), and A-Frame.js (a framework on top of Three.js that lets you write HTML-like elements to compose a 3D scene, and also provides a visual 3D inspector). The next challenge was to get these tools to bend to our will.

Our goal was to track a (serum) bottle's movement in such a way that our application could determine its position and behave a certain way in response. Or more simply, for the first test case: if the bottle tilts to the right or the left, change something.

II. Spatial coordinates and marker tracking for using the bottle as a controller

AR.js library: where is the marker?

As the team started working with AR.js midweek, they hit a few road bumps. Danielle notes, "The biggest challenge with the AR library is ensuring the content appears where we want it to appear, which is the biggest challenge for any AR application!" They started with Natural Feature Tracking (NFT) in AR.js but noticed issues with the alignment between the image and the 3D object overlaid on it. They then looked into how the coordinate system was set up in AR.js, which led them to discover another underlying issue around the virtual camera: AR.js likes to position the camera or the marker at the origin of the coordinate system. It has different modes for whether the camera is fixed or in motion, which can affect how it tracks multiple markers.
Essentially, the coordinate system in AR.js is set up for scenes where either the markers are stationary or the virtual camera is stationary, and it has trouble when both are moving around.

Marker tracking and fiducial markers to identify object motion

We tested a couple of different markers to make it easier for AR.js to find the serum bottle. QR codes were especially interesting, as these are in common use today. However, the far better performing markers ultimately turned out to be fiducial markers. Explains Jason, "Fiducial markers are black and white images which look kind of like a QR code but are simpler: they have a black square bar around them, and any kind of rotationally asymmetrical symbol in the middle, so the computer can tell which way that square is turned. These have been used in AR for a long time, so there is a lot of solid code around how to deal with them."

Fiducial marker

Three.js and A-Frame to act when motion is detected

As a last step, we tested what happens when we try to tell AR.js to recognize the rotation of the bottle. Under the hood, AR.js leverages the Three.js WebGL framework, and there's another framework called A-Frame (originally from Mozilla) that can be used on top of both to quickly write HTML-like markup describing your scene. The team built a custom attribute for A-Frame elements that triggers a rotation event when the bottle is tilted left or right in front of the camera. ... And it worked! In the video below, you can see that as the bottle is turned, the attribute we created looks at the acceleration rate and which way the bottle is turning, and when it determines that the bottle is tilted, it switches the image in the middle to blue. So we've got an interaction using the bottle as a controller, which is pretty great!
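The post doesn't show the team's custom A-Frame attribute, but the general shape of such a component can be sketched. The sketch below is illustrative, not the Project Spyglass code: the component name `tilt-watcher`, the 20-degree threshold, and the event name `tilt` are all assumptions, and it uses only the marker's rotation angle (the real implementation also looked at the acceleration rate).

```javascript
// Sketch of a custom A-Frame component that emits a "tilt" event when the
// tracked marker entity rolls past a threshold. All names here are
// illustrative assumptions, not the actual Project Spyglass implementation.

// Pure helper: classify a z-axis rotation (in degrees) as a tilt direction.
function classifyTilt(zDegrees, threshold) {
  if (zDegrees > threshold) return 'left';
  if (zDegrees < -threshold) return 'right';
  return 'none';
}

// Register the component only when A-Frame is present (i.e. in the browser).
if (typeof AFRAME !== 'undefined') {
  AFRAME.registerComponent('tilt-watcher', {
    schema: { threshold: { type: 'number', default: 20 } },
    // tick() is called by A-Frame on every rendered frame.
    tick: function () {
      // Read the entity's roll from the underlying Three.js object.
      const zDeg = THREE.MathUtils.radToDeg(this.el.object3D.rotation.z);
      const tilt = classifyTilt(zDeg, this.data.threshold);
      if (tilt !== this.lastTilt) {
        this.lastTilt = tilt;
        // Listeners can react, e.g. swap the displayed image to blue.
        this.el.emit('tilt', { direction: tilt });
      }
    }
  });
}

// Export the pure helper so the logic can be exercised outside the browser.
if (typeof module !== 'undefined') module.exports = { classifyTilt };
```

The component would be attached in markup (e.g. `<a-marker preset="hiro" tilt-watcher>`), with a listener on the entity's `tilt` event doing the visual change.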
Next week: learn how we will pull in data from Contentstack to populate the AR interactions, the benefits of a headless system for creating AR experiences, and our path towards building real scenarios and views, using final assets! Read the Week Two Project Spyglass update now.

Sep 01, 2020

How to Concept and Pitch an Augmented Reality Demo (in 1 Week or Less)

Week Zero: Getting to the Pitch

Sometime in the beginning of summer, the Contentstack marketing team called up Valtech and asked them to build an Augmented Reality (AR) demo on top of our CMS. We caught Pascal Lagarde (VP Commerce) and Auke van Urk (CTO) in a good mood. They said yes. Then everyone went on summer holidays. Until about 2 weeks ago, when Pascal called us back. He said: "We'll build you an AR demo. And we're going to do it in the next 4 weeks."

This is the story of how they did it, told (almost) live. Today, what happened in Week Zero: how the development team at Valtech went from receiving our somewhat vague brief to pitching us two sharply defined concepts a week later. We'll even be sharing the actual pitch deck. (It's at the bottom of this post.)

Getting the Brief

Jason Alderman is a senior engineer at Valtech, but he used to design interactive exhibits in museums. One of his favorite projects was a donation machine for a museum lobby: a giant glass porthole attached to a set of sails. When the machine detected a donation bill, it would suck it up through a snaking tube into the porthole, which would then activate a sensor that made the sails blow as if in the wind. He's excited about the possibilities of Augmented Reality. "I like the connection between the physical and the digital world. Right now we're holding these small pieces of metal and glass up to our faces and moving them around like a magic window. The technology is still evolving. I'm really interested to see what the end result will be."

Jason was the first team member tasked with responding to "the brief," which was, admittedly, a somewhat rough Google Doc where a few Contentstack people had traded ideas with a few Valtech people along the lines of "could it look like Minority Report?" and "it needs to be interesting for marketers and developers alike". This was the actual brief.
Jason is positive about this experience, telling me: "We were given a lot of creative free rein. That's one of the things I love about this company: they really invest in the people and let them run with their ideas." He planned a workshop with a few other developers, UX researchers, and experience designers. "We figured that we probably needed to get as many perspectives inside the company as we could and brainstorm things."

Identifying the Parameters: Why Contentstack?

1. IDENTIFY HOW A HEADLESS CMS WILL BE USEFUL IN AN AR CONTEXT

Contentstack is a Content Experience Platform (CXP) with a headless content management system (CMS) at the core. It's essentially a highly user-friendly database and environment for content creation and storage (text, media, or otherwise), with powerful APIs and integration capabilities that allow that content to be easily delivered to any kind of channel or environment. Traditionally, content management systems have been used to power the web, but today the demand for content-rich experiences is significantly more diverse. Beyond the web, mobile web, and even apps, brands need content to exist in an atomic form, ready to be delivered in an optimized and personalized way to digital billboards, point-of-sale terminals, social media, marketing automation systems, and yes, Virtual Reality and Augmented Reality experiences.

Valtech, together with Contentstack, is a founding partner of the MACH Alliance, a governing and educational body promoting a new standard for enterprise architecture: Microservices, API-first, Cloud-native SaaS, and headless. Says Jason, "It's a way of having an enterprise CMS that can feed all sorts of different front-ends, from mobile apps to React apps."

2. LIST KNOWN STRENGTHS OF CONTENTSTACK CMS

The Valtech team made a list of all the strengths of the Contentstack platform that could be highlighted in an AR demo, which looked like this (see more of this in the pitch deck at the end of this post).
The strengths of Contentstack for an AR demo, as identified by Valtech:

- Detailed content models can be structured easily to feed websites, apps, and of course, AR.
- Internationalization: robust multilingual support, including fallback languages. For instance, if there is no content for a given channel in Mexican Spanish, you can fall back to general Spanish content.
- Robust workflows: easily configure layered steps comprising different actions (approval, commenting, adding elements) that can be set up to automatically push to the next stage.
- Tremendous capability for personalization through powerful integration with tools such as Optimizely or Dynamic Yield.

Isolating the Task: Why AR?

AR is hot right now. But the team that took our brief wasn't a pure AR team. It was a group of people who know how to build experiences and augment them with technology in order to make them either useful, or really fun, or both. Given the brief of delivering content-rich experiences pulled from a headless CMS, their first question was: "Are we sure the best way to show off this CMS is through AR?"

1. WHAT ARE THE BENEFITS AND USEFUL APPLICATIONS OF AUGMENTED REALITY?

Along with Jason, leading the brainstorm efforts was Danielle Holstine, Delivery Manager, a software engineer turned project manager who spent ten years developing AR and VR technology. She sees potential for AR in everything: "To experience VR you currently have to put this big thing on your face and it's like blinders; you can't see anything else around you. AR, on the other hand, uses what you're already seeing and just adds information on top of it, so it's additive." Especially interesting is the potential of web-based AR and the ability to move away from native apps, which makes these experiences more accessible and easier to engage with.
"Phone manufacturers like Apple and Samsung have been investing in the hardware required to do augmented reality functions: improving cameras, sensors, all those kinds of things. And equally on the software side, there's been a lot of development on browser-based AR, so it no longer requires a dedicated application to make use of your camera and the sensors on your phone; you can access the information through just a browser."

But UX researcher and designer Hayley Sikora had questions. "Knowing that we're working with an amazing CMS and that the brief was to convey information through it, my question was: why are we doing it in AR? Because it's very difficult to get large amounts of information across in AR."

Britt Midgette, Sr. Experience Designer, agrees. "We can't just do AR because it's cool. It must enhance the experience in a needful way. VR is a different thing: you are creating worlds, there's 'no reason' to do that, but it's fun, and you can add a lot of stuff in that world. You can still show people a lot of things in an AR world but really, why?! Some things should be static. AR can just get in the way of what people are trying to do."

2. FRAME THE BRAINSTORM TO SERVE THE OPPORTUNITY

The resolution came from framing the question as a storytelling narrative: since Augmented Reality is layering information on top of the real world, hopefully to make things easier and provide context, there are industries that have complex information which can be simplified or explained, personalized, and delivered through an AR experience.

The team (Jason, Danielle, Hayley, Britt, Pascal, and engineers Alex Olivier and Brian Harrington) then broke down this narrative into its component parts and discussed each in turn. The goal was to come up with 1-2 strong concepts that could be presented to Contentstack in a pitch the following Monday.

BRAINSTORM Q1: What are industries that have complex information?

The team used Miro as a digital whiteboard.
The Miro board with dot-voting star stickers.

The ideas did not start out clustered together, but rather as a brain-dump of all kinds of industries that have complex information that might be difficult to understand, or that people might need some help digging through to figure out what is relevant. Some of the ideas included:

- Vitamins, health, skincare, beauty products
- Medicine & pharmaceuticals
- Software documentation, technology
- College admissions
- Insurance, credit cards, finance
- Real estate, apartment hunting
- Outdoor equipment, travel
- Home goods, auto parts, instruction manuals (and IKEA)

The team plotted it all out in a grid of post-its, then clustered it into meaningful groups, then voted on their favorites. The two industries that seemed to be the most popular were skincare & beauty and museums & education.

What are industries that have complex information? Miro board brainstorming.

That was the first part of the narrative: there are industries that have complex information, which can be simplified or explained, personalized and delivered through an AR experience. The next step was to identify the kinds of information that could be simplified and explained in the two favored industries.

BRAINSTORM Q2: Given the industries "beauty & skincare" and "museums & education", what is their complex information?

The questions people had around beauty and skincare came naturally to many in the room, like Hayley, who admits, "I have so many questions about what goes into my own skincare regimen." Ideas listed included:

- Ingredients: How can I understand the composition of this product? Are there known allergens in this? How have these ingredients been sourced?
- Benefits: What is actually healthy, versus just a "scam"? What is this product promising to do, and how can I track whether it's actually working?
- Reviews: Can I see a rating or review? Who recommends this product? Are there influencers that have covered it?
When it came to museums & education, Hayley was inspired by the experience of her aunt, who recently decided to homeschool her children: "I was thinking that it would be a really amazing opportunity to provide kids across the world with some interactive learning tools that could, first of all, give their parents a break from having to be their homeschool teachers 100% of the time, but also give them some fun ways to learn this content." Ideas for museum & educational complex information included:

- Learning management: Tracking systems for grades, assessments, progress
- Additional context: Who was the creator of an artwork? What are the narratives behind certain artifacts that give them context, beyond just names and dates?
- Details: Virtually dissect a dinosaur skeleton; pull out different bones and see where they were found, what they were for, and how they evolved.
- Media: Sound clips, 3D models, music (instrument types, styles)
- Provenance: How did the artifact get to the museum? Where was it originally created? What hands did it pass through? Will it be, or has it been, repatriated to the original cultures or people to whom it belongs?

What is the kind of complex information that we could work with?

Here the team had fleshed out the second part of the narrative: there are industries that have complex information, which can be simplified or explained, personalized and delivered through an AR experience. The final piece of the puzzle was personalization.

BRAINSTORM Q3: How can we personalize this information?

Jason explains that without personalization, any content experience, AR-enhanced or otherwise, is just a bundle of information. The benefit of using technology to represent content in a dynamic format like AR is that it can be personalized, made highly relevant and specific to the person accessing it. Adds Hayley, "Personalization is only going to continue to get more important.
The newest generation is seeking more personalized material than ever, because they get instant gratification all day long with personalized content sent to them on their social media feeds, so they're expecting that out of other channels as well."

How could personalization be used to de-complexify the types of information we identified in beauty & skincare and museums & education?

Beauty & Skincare:

- Ingredients: Which of these ingredients will help me achieve my goals?
- Recommendations: Based on your purchase history and preferences; hide products that might cause an allergic reaction or are otherwise incompatible with your personal history. Upload a "shelfie" and get an analysis of how this would fit into your existing routine.
- Face scans: Similar to other Valtech projects showing makeup on someone's face "live", can products be recommended based on a scan of your face?
- Phone a friend: Are there reviews I can see from people I know, or from elsewhere online? Can we support or mimic the social buying experience?

Museums & Education:

- Game mechanics: Tour, scavenger hunt, quiz
- Social dynamics: Tether two people virtually to join in a trivia battle, or to share the experience in a personal way
- Responsive content: Dynamically generate a layout of a physical space to match your preferred experience, such as drawing a "map" for you personally to follow through a museum exhibit
- Avatars: To protect kids' privacy, instead of putting all of their personal information into the app, can they create an avatar that represents their preferences and personality traits?
- Text to speech: Keeping in mind that a lot of content stored in Contentstack CMS is text-based, could text-to-speech be used to create a personalized audio tour experience from existing written content?

How can we personalize this information?

There are industries that have complex information, which can be simplified or explained, personalized and delivered through an AR experience.
This was the end of the brainstorming session, where two strong concepts had emerged to be taken into the pitch presentation.

Can It Be Done?

From here, the final question was: can this be done in our timeline of 3 weeks from this point on?

Here's Danielle: "We knew we had three weeks, which is a very short time, to implement something this complex. A traditional two-week sprint process obviously isn't going to cut it for this. This work needs to move so rapidly that we don't have extended periods of time to wait, to have something blocked, those kinds of things.

"So as the brainstorm team was talking, I sketched out a plan of three one-week sprints, with rough goals for each of those weeks.

"The first week is really focused on nailing down the technology we're going to use. What are the AR libraries that we're going to use? How are we going to track the items? Are we going to do it with fiducial markers, with image-based markers, with object tracking? Each of those has an increasing level of complexity, so we need to make that decision really soon. The next step was nailing down our interaction models and what we want the experience to be.

"Then the second-week goal is going to be focused on really hard development: making the application, getting the data into Contentstack, and getting the data back out and visualized the way that we want it in the AR space.

"And then the third week would be really focused on polishing and refining. So the intention is, between the first week and the second week, to actually have our proof of concept: a working thing that we can send around to everybody to test and manipulate, and get some feedback on. And then spend that last week editing, adjusting, and refining. And if we have time, adding in some of the many nice-to-haves that we left on the drawing board."

The Pitch

1.
LOWEST EFFORT, HIGHEST REWARD Based on what they knew they could accomplish in 3 weeks, and that had the highest potential to deliver a “wow”-factor demo, Valtech pitched Contentstack two ideas for an AR proof of concept. 2. PRESENT IN AN EASY-TO-IMAGINE FORMAT Valtech kept the presentation short, and pitched only one slide per concept, complete with hand-drawn illustrations that showed the concept, but made it clear that it was a mere idea, and not a fully living thing. Knowing that it was possible, and armed with a wealth of ideas, here are the two ideas Valtech presented to us. Beauty & Skincare What’s Inside the Bottle? Scan a product on the shelf or at home to get personalized recommendations based on the ingredients in the product. See other products that are similar based on some criteria (feel, effect); products that are different (avoiding allergens, discovering other product lines); learn about sustainability and sourcing of the ingredients; or get instructions (see influencer content on tips and tricks, see usage and recommendations from the brand.) The beauty and skincare concept, with sketch illustration by Jason Alderman and Lindsey Harris Museums & Education Personal AR Audio Tour. In a museum gallery or a simulated at-home environment, receive a personalized museum audio tour using text-to-speech technology, including; paths based on how objects in the museum are related to each other; paths that follow a particular preferred narrative thread or subject; synchronize the audio tour with other devices so users can experience the tour together with family or friends. Museums & education concept, with sketch illustrations by Jason Alderman 3. OFFER RECOMMENDATIONS & GUIDANCE The team also gave some personal guidance on their preference, which was towards the retail app. 
Says Jason: “I love museums, but we didn’t think the museum demo would be as effective as one that retailers could more easily translate to their own businesses.” Hayley adds: “The opportunities in education are almost endless because there’s so much we could make interactive and gamify. The challenge with education and museums is bureaucracy — who actually takes ownership of it? What school system is going to pay to create an AR learning program for their kids? That’s just not feasible. So I think taking this down a route where we could be talking about products that can go to a broader consumer audience makes sense.”

The Decision

On the Contentstack side, my colleague Gal Oppenheimer (Manager, Solutions Architects) and I (I’m Varia, Director of Marketing) immediately gravitated toward the retail and skincare application idea. So that’s the application we’ll build — and over the next few weeks, we’ll share with you exactly what that looks like. We’re calling it Project Spyglass.

In the coming weeks, we will show how Gal and his team helped Valtech build the content models that power this experience from Contentstack. Plus, Valtech’s software engineers research AR frameworks, interaction design storyboards start to take shape, and we wrestle with the surprisingly sticky problem of marker tracking. Read the week 1 post now.

See the full pitch deck presented by Valtech below: Pitch deck: Augmented Reality POC by Valtech and Contentstack

Jun 23, 2020

Why Integration Is No Longer a Dirty Word

An Interview with Matthew Baier, MACH Alliance Board Member

Matthew Baier wants to free enterprises from expensive mistakes. As co-founder of Built.io, an integration-as-a-service company, he has experienced firsthand how enterprise technology has become impossibly complex and frustrating to buy and to deploy. As all-in-one suites offer more and more tightly coupled functionality, they also lock businesses into a rigid status quo for years at a time. That’s why the team was inspired to re-imagine content management with Contentstack — a modern CMS that turns the concept and implications of traditional software suites on their head. Today, a new era has arrived, built on the promise of a new architecture and backed by an ecosystem of companies that share the belief that bringing together best-in-class technology results in superior business outcomes. The same revolution that is occurring in and around content management is simultaneously happening in other enterprise tech sectors — for instance, in digital commerce, in search, and with technology implementers and integrators. Today, fourteen of these companies have announced the launch of the MACH Alliance: a group of next-generation technology experts committed to liberating companies from all-in-one suites through education, universal standards, and the development of a truly open ecosystem rooted in the principles of “MACH” architecture: Microservices, API-First, Cloud-Native, and Headless. We sat down with Matthew, MACH Alliance board member and CMO of Contentstack, to discuss the launch — and why he’s excited to de-vilify the concept of integration.

Contentstack: Since your start in the software industry, how have you seen the world change for enterprises?

Matthew Baier: Everything has become unbelievably complex. There was once a world in which your digital audience was just in one place, on your website. Today, you have countless digital channels, social platforms, and a myriad of devices.
People are not paying attention in the same way or place they once were. The complexity of all these moving pieces is a challenge, but it also forces enterprises to rethink the equation of how they manage that complexity. And that presents an opportunity to think differently and find a better way. Everything is moving faster, too. Enterprises have always been at risk from smaller, nimbler players who can outmaneuver them in certain areas. But now there’s technology that can, in some ways, level the playing field. It allows enterprises to keep up and gain agility, so that this pace doesn’t become frightening and overwhelming but instead becomes a capability they can infuse into the business.

How has the relationship between enterprises and vendors changed?

What’s really changed is that integration is a superpower that enterprises are beginning to unlock. For a long time, enterprises were at the mercy of large technology suites, forced to choose between being efficient — solving as many problems as possible with one product — and being agile, continuing to innovate in the face of relentless change. These suite vendors have been telling enterprises: “We’re the best at everything, and if you try to mix other things in, it gets complex and scary.” But that’s just not the case anymore. The integration of different technologies from different vendors no longer has to be a cumbersome, costly, lengthy process. It has become simple, thanks to a whole host of new-wave vendors who, like Contentstack, have built their products with APIs at the very core.

So, let’s talk MACH. It seems like a lot of technical concepts tied together: “Microservices, API-first, Cloud-native, and Headless.” But it can also mean speed: MACH speed. What are the most important things for a business to know when they ask, “What’s MACH?”

It is crucial to understand the definitions of MACH, the key concepts that it represents. Here’s why: Remember when “Cloud” first became a thing?
It was considered a revolutionary new idea that was a little bit scary and a little bit dangerous. But there was a tipping point when companies like Salesforce made Cloud fine for everyone, including banking, and now it’s generally considered safe and much more efficient than the legacy, on-premises approach. As soon as that happened, every other vendor also said, “We’re Cloud!” whether or not it was true. It was as if everyone had invented Cloud alongside the true pioneers, and that created a problem for enterprises, who were buying what they thought was Cloud and still finding something they had to install on their servers.

The same is happening now across many more dimensions. Take API-first: these days, everyone has an API. But there’s a huge difference between a product designed and built API-first and one that had an API slapped on ten years after it was built. And it’s the same with headless. You shouldn’t have to go through the investment and deployment of a technology before you realize that you’ve bought the same thing as before. You should be able to catch it sooner.

The MACH Alliance is a group of companies that want to help organizations understand how to evaluate and test platforms to see if they will truly bring the benefits that they promise. This is a group of businesses that have built products or services aligned with these key concepts, and they don’t want to hog their knowledge — they want to share it and work together to give enterprises the ability to pick the best possible technologies in each category. Together, they will make it easy for enterprises to assemble, support, and never stop innovating in their respective domains.

Tell me more about the freedom that is offered by “going MACH.”

There are two big factors. First, amplifying the positives: you get faster time to market, more agility, and the ability to evolve and respond to changing environments constantly.
You can select the very best products and put them together in a way that’s perfect for your business, and unlock yourself from bulky suites. You can get a much lower total cost of ownership (TCO) for your entire technology stack and never again have to deal with costly, manual, disruptive upgrades. All of this results in exciting, new, net-positive gains. The second is reducing the negatives: not just cost, but also risk. MACH is as revolutionary as the “undo” button. You can make decisions that don’t punish you for years to come. You can pick a piece of technology that you’re unsure about, and instead of committing to it for the next ten years, you can just test it out. If it works as expected, it’s already there and integrated. And if it doesn’t, you can remove it from the stack without everything falling apart.

Why was it important for Contentstack to be a founding member of this alliance?

For us at Contentstack, MACH is not just a trending topic. It is the fundamental belief upon which the company was built. Every essential business function has an internal system to manage it. Customer information, for instance, gets managed with CRM; products with supply chain management. But one essential function — the way that you reach your audience, the way that you bring your product and message to them — is content. Content is a trillion-dollar industry investment, and it’s the only area that didn’t have a modern system of record. Many businesses are still struggling with content technology that’s been around for 30 years and remains – at its core – unchanged, while every other area has reinvented itself. So we’ve done it. We rebuilt it, modernized it, and created a truly MACH content solution. And while we’re providing this radically new content approach, we don’t want our customers to be held back by suites in other parts of their business either. If you’re in commerce, you need a commerce engine.
Where’s the modern equivalent to what we’re doing for commerce, which is its own huge space? Well, you’ve got wonderful players like commercetools who believe in the same thing. Now, we can make this incredibly powerful combination happen through integration.

So is integration the key here?

Not just integration itself but also the concept of composable, reusable, “integratable” technology components – that’s the main idea behind the “M” in MACH, microservices. We do everything through integrating microservices, and we want our customers and partners to enjoy the benefits too — to feel like it’s as easy as plugging in a USB device. It’s about more than just having an API — it’s allowing the data to flow between systems and gaining new insights and new efficiencies from that flow. It’s about bringing more power to the fingertips of users without forcing them to dramatically change their processes and behavior. Using Contentstack, you have always been able to connect to any system with an API. If you connect to another MACH system, you’ll find that the integrations are, in many cases, already pre-built and very easy to customize and use. You’ll find technology that’s compatible, well documented, and offers rich tooling. And that’s just the developer-level experience.

What do easy integrations mean for marketers?

A typical content marketer’s experience today is pretty complex: they might have 15 windows open — writing a blog post, consulting the SEO tool for metadata, moving content to another system for publishing — and all these pieces have to be stitched together. The marketer is forced to act as the “glue” between systems, manually bridging the “integrations” to make all of this work. How horrible is that? It would be more helpful to open your system of record and have everything at your fingertips. As you step through your day’s tasks, everything you need is served to you. Your blog post isn’t matching your desired tone of voice? Fix it right there.
Want to publish to a different-language country site? Route it to the automatic translation workflow. Connect to your ecommerce product feed? Here’s the button for that. Want to know how it’s performing compared to last week’s post? The analytics are right there. Everything is right in front of you. This is a game-changer for businesses.

It sounds like a much different reality than content marketers are used to.

It’s no surprise that people pretty much universally hate Content Management Systems (CMS) today, because they’ve traditionally been a major contributor to corporate and personal headaches. We believe it should be the other way around: the CMS should be the system that you want to live in day to day, because it has the potential to be the most useful. It can make you better at what you do.

And this is possible today?

Yes. With the MACH Alliance ecosystem, it’s clear that we’re not the only ones who believe this. Many companies, big and small, have this in deployment today, boxing out their competitors left, right, and center. We’re not waiting for some magical moment when this will be possible. We’re already there.

Learn more about the MACH Alliance at machalliance.org. Read a practical guide to going MACH in the ebook Break the Replatform Cycle with MACH Architecture.