Development of an Augmented Reality Retail Skincare POC: Content Modeling and Interaction Building (Week 2/3)

Week Two: Content modeling for AR; final designs; selecting and programming marker tracking movement patterns and text display parameters.

Welcome to the reality of building an augmented reality demo. This is the second-to-last week of our "live" project documentation (find week zero here, and week one here), and this week we moved away from designs and theory and into hands-on development on all fronts of this project.

In this week’s post, you can read about:

  • Content modeling for AR in a headless CMS
  • Designing what the live AR experience will look like
  • Developing the live interaction: fiducial markers, motion tracking, and text display

As a summary, we have decided to build the following: A mobile web-browser Augmented Reality (AR) experience to be used with a brand’s skincare products -- for the purposes of this POC, we are focusing on the skincare category of serums. It will help the customer to select the best serum for them in the store; to receive onboarding instructions and personalized recommendations when first using it; and after using it for a while, receive updated recommendations and information.

First up this week: how to actually get all this information into our AR experience.

Headless CMS content modeling for Augmented Reality

In order to provide a content-rich AR experience to our users, a lot of data (brand and product names; product textures; ingredients’ purpose, source, contraindications; usage instructions) must be stored in our CMS (Contentstack) to be easy to query (so it shows up the way we want, at the speed we need, and prepared for personalization), and easy to edit or modify (because products get added; names change; instructions get updated; new ingredient configurations and contraindications happen).

The process of documenting all the types of content you’ll need for an experience (whether AR, VR, mobile app or website) and putting it into logical buckets to ensure your CMS is effectively configured for editing and delivering that content to that experience (or many experiences) is called content modeling. (Here’s a primer we’ve written on this topic.)

With traditional content management systems, which have been designed for building web pages, this is a pretty straightforward process. You basically have a few ways you can organize things: folder structure can reflect your site pages, or it can reflect content types (elements of a webpage like banners, images, forms, text; repeating formats like blog articles, press releases, customer testimonials, and so on). Then it’s just a matter of giving editors page templates that allow them to mix and match these content types within certain identifiable limits. Or in some cases, the CMS even comes with static templates that can’t be customized or made more flexible at all. The assumption is that because there are only a few, relatively predictable ways this content is going to be used across all customers of that CMS, it’s easier for everyone to pre-define the content models.

When it comes to headless systems, though, things are a little bit more fluid. Especially for a CMS like Contentstack, which was designed to be as un-opinionated as possible about where that content is going to end up. While you can have (and we do provide) lots of solid guidance on specific examples for different industries and use cases, at the end of the day, your content model is going to be hyper-unique to your organization’s ways of working and ways of delivering your content.

As it turns out, this is actually a good thing when it comes to building out Augmented Reality content models.

Benefits of a headless system for Augmented Reality

Ben Ellsworth, Solutions Architect at Contentstack, says that headless CMS is somewhat of a no-brainer for developing AR experiences precisely because of its flexibility, or lack of opinion about where your content is going to go. He explains: 


"There isn’t a long-standing tradition of AR and VR applications, and there’s no solution that is pre-built for the problems that an enterprise is going to experience when they’re developing for AR. When you’re trying to do something uncharted, you cannot let yourself be limited by something that was built with “websites” in mind.

Contentstack is extremely agnostic to the display and dynamic in the way it relates content to the display layer, so that you can architect the data and the content structure in the best way for where it’s going, no matter what the end goal is.”


“You’re only constrained by the limits of today’s technology,” adds Gal Oppenheimer, Manager, Solutions Architects at Contentstack. “So, in the case of AR: what can the phone browser do, and what can the cameras do? Those are actually our constraints, because that’s where we’re pushing the boundaries in terms of what technology allows us to do today.” 

Content modeling: Identifying, classifying and uploading content

What did content modeling for our AR experience actually look like?

Step 1: What content is there?

First, we had to figure out all the different kinds of content that the experience might need to use.

To do that, we had to research some serums so we could know what kind of information exists about them. We found this site particularly useful for discovering the purposes of product ingredients.

giphy_(32).gif


Step 2: Extrapolating - what are the content types that we might need?

In this step, we listed every kind of content that we could identify about skincare products that might be relevant to our purposes. We laid this out in a document with hypotheses for the way that we could structure these in the CMS (text, group, reference, etc.)

The Contentstack team consulted with the Valtech team on how to structure this content in the most useful way.

Screenshot_2020-09-10_at_19.10.52.png



Sidebar: Flexibility vs Ease of use

The biggest question that comes up when designing content models in headless CMS is whether for a given scenario, more flexibility would be better, or whether some rigidity would actually better serve the end users (editors). Ben explains:

"There is a point of diminishing returns where additional flexibility ends up being detrimental to productivity. When a content creator has access to 1,000 options for structuring a piece of content, they have to make 1,000 decisions every time they create a piece. This is an extreme example but with a headless content management system, the person modeling the content does have the power to create an infinitely flexible system.

“As you model your content, ask yourself why you’re giving the editor the options you are.

“For example: in our application, we were deciding between using a group field or a modular block for the product usage instructions. The modular block would allow editors to move the instructions to any place in the AR content display. However, because we would only ever need one set of instructions, and the single set would need to be mandatory, we went with the group field. It has most of the benefits of a modular block without the unnecessary features like multiple instances.

“On the flip side, we had originally considered using a simple drop-down to choose product categories. In a non-headless system, this would be par for the course since the editor needs to be able to pick between many options for each product. With a headless system, we can do better and use reference fields. This lets us create a whole new content type for the categories where we can store their names as well as additional information like descriptions, links, and images. We then let the editor reference that field in the product content type. If we need a new category added to the list, we don’t have to change the content model directly, which would require a higher level of access in the system that could break other processes. We simply create a new entry of the category content type and it will automatically be available to all product entries.”
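To make Ben’s example concrete, here is a rough sketch of how a product content type along these lines might be laid out: a mandatory group field for usage instructions, and reference fields pointing at separate category and ingredient content types. It loosely follows the shape of a Contentstack content type definition, but every field name and detail below is illustrative rather than taken from our actual stack.

```javascript
// Illustrative sketch of a product content type (not our actual stack):
// a mandatory group field for usage instructions, plus reference fields
// that point at separate "category" and "ingredient" content types.
const serumProductContentType = {
  title: "Serum Product",
  uid: "serum_product",
  schema: [
    { display_name: "Product Name", uid: "title", data_type: "text", mandatory: true },
    { display_name: "Brand", uid: "brand", data_type: "text" },
    {
      // Group field: exactly one set of instructions, always present.
      display_name: "Usage Instructions",
      uid: "usage_instructions",
      data_type: "group",
      mandatory: true,
      schema: [
        { display_name: "Frequency", uid: "frequency", data_type: "text" },
        { display_name: "Steps", uid: "steps", data_type: "text", multiple: true },
      ],
    },
    {
      // Reference field: categories are entries of their own content type,
      // so adding a category never requires touching this model.
      display_name: "Category",
      uid: "category",
      data_type: "reference",
      reference_to: ["product_category"],
    },
    {
      display_name: "Ingredients",
      uid: "ingredients",
      data_type: "reference",
      reference_to: ["ingredient"],
      multiple: true,
    },
  ],
};
```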

Step 3: Input the content for the AR experience into the CMS

With decisions on the content types made, it was time to build out and populate our content model. To do that, we had to create some serums! We did this by taking inspiration from the real serums that we researched in step 1, and coming up with some ingredient combinations and usage scenarios of our own. 

giphy_(31).gif

We entered the content data into the CMS. This part was pretty straightforward, since we were following the model that we had already laid out. The bonus aspect of this is that now, when a brand wants to build out an AR experience like this for their products, the content modeling has already been done. So we’ve got a template to work with in the future (of course, customized to their particular use case). Below, you can see some examples from the live stack!

Step 4: Querying the database

The last step was figuring out how to get data out of Contentstack and into the AR experience. Contentstack has two ways to retrieve data via our Content Delivery Network (CDN), and the team wanted to test both of them. So Valtech wrote a quick sample that pulled down the data we entered (as JSON) from each in turn. They decided to use the new GraphQL API because of the simplicity of queries, and because it returned fewer data properties. They then added an additional function to process the response JSON to simplify the object structure — removing extra nesting on reference field JSON, rearranging how the data was organized in the response from the API — so that it was more easily and efficiently consumed by the AR code they were already writing.
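As a rough illustration of that last step, the sketch below fetches a product entry over GraphQL and then collapses the extra nesting around reference fields. The endpoint, query shape, and field names are hypothetical placeholders, and the flattening helper mirrors the idea described above rather than Valtech’s actual code.

```javascript
// Hypothetical sketch: query a product entry via GraphQL, then flatten the
// reference-field nesting before handing the data to the AR layer.
const ENDPOINT = "https://example.com/graphql"; // placeholder endpoint

const QUERY = `
  query SerumByTitle($title: String!) {
    products(where: { title: $title }) {
      title
      usage_instructions { frequency steps }
      category { edges { node { title description } } }
    }
  }
`;

async function fetchProduct(title) {
  const res = await fetch(ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: QUERY, variables: { title } }),
  });
  const { data } = await res.json();
  return flattenReferences(data.products[0]);
}

// Collapse { edges: [{ node: {...} }] } wrappers so the AR code can read
// product.category[0].title instead of digging through edges/node.
function flattenReferences(value) {
  if (Array.isArray(value)) return value.map(flattenReferences);
  if (value && typeof value === "object") {
    if (Array.isArray(value.edges)) {
      return value.edges.map((edge) => flattenReferences(edge.node));
    }
    return Object.fromEntries(
      Object.entries(value).map(([key, v]) => [key, flattenReferences(v)])
    );
  }
  return value;
}
```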


Designing what the live experience will look like

Following last week’s progress on creating sketches and comps for how to display the AR information around the product bottle, this week Svante (our designer) worked on figuring out what the whole AR experience will look like. That meant going beyond the “augmented” part of information display and marrying that with the “reality” side of things.

For Scenario 1, shopping in the store, we created a way to home in on a particular product while in a brightly-lit, colorful shop. As you can see in the graphics, the idea was to darken and blur the background (more on how we developed this below) and zero in on exactly the product that the customer wants to see more information about.

Screenshot_2020-09-10_at_19.51.22.png


For Scenarios 2 and 3, a similar “darkening” effect was applied so it would be easier to see the displayed information no matter what kind of colorful or distracting bathroom the user might be accessing the experience in!

Screenshot_2020-09-10_at_19.51.32.png

Then it was over to the developers to figure out how to actually make all of this happen.

Developing the live interaction

This week, the development focused on three major elements of the AR experience that we need to nail down for this POC:

  1. Finalizing what the fiducial markers will look like,
  2. Figuring out exactly how we’re going to track those markers to create the best user experience, and
  3. Figuring out how the AR elements will be displayed, including the background dimming effect
1. Fiducial markers: smaller & customized

Last week we figured out that fiducial markers (those black square things) would work best for this POC as they were the easiest for our AR framework to latch onto. But we also want our product to be as pretty as a skincare label usually is, so we tried to see if we could shrink those markers down for more design flexibility. The standard size is 1 inch, and we were able to get them down to 0.5 inch and still have them tracking the bottle movement - in all 3 axes - really well.

giphy_(27).gif


We also tested creating custom markers, which is of course going to be crucial for designing stylish skincare bottles. These also worked - in fact, in some cases they worked better than the standard markers.

umbrella.png

Custom “umbrella” fiducial marker.
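For reference, here is roughly what declaring markers looks like in AR.js’s A-Frame integration. A custom marker points at a pattern (.patt) file trained from the custom image; the script sources, asset paths, and content entities below are placeholders.

```html
<!-- Minimal sketch of an AR.js + A-Frame scene with a custom pattern marker.
     Script sources and asset paths are placeholders for the builds you use. -->
<script src="vendor/aframe.min.js"></script>
<script src="vendor/aframe-ar.js"></script>

<a-scene embedded arjs="sourceType: webcam; detectionMode: mono;">
  <!-- umbrella.patt would be produced by running the custom umbrella image
       through AR.js's marker training tool. -->
  <a-marker type="pattern" url="assets/umbrella.patt">
    <a-box position="0 0.5 0" material="color: #66ccff"></a-box>
  </a-marker>

  <!-- The built-in "hiro" preset needs no training; handy for quick tests. -->
  <a-marker preset="hiro">
    <a-sphere radius="0.4" material="color: #ff6699"></a-sphere>
  </a-marker>

  <a-entity camera></a-entity>
</a-scene>
```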

2. What’s the most user-friendly way to display AR content in response to markers in motion?

We tested different ways of spinning and tilting the bottle to control what was shown on-screen. Alex Olivier explains that her main concern - other than supporting natural hand movement - was to lower the risk of the marker getting lost. “In many AR experiences, the content disappears entirely if the marker is lost for a second, which I think is a mistake,” she says. For this reason, the most compelling motion they found for the bottle-as-controller was a rotation around its own axis.

A big decision point at this stage was how to display the content controlled by rotating the bottle, which exposes different markers to the camera as it turns. The team created a system of keyframe rotations around a 3D layout and then animated / interpolated between them as different markers were detected. “We had to dust off our trig books!” says Alex.
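A rough sketch of that idea follows. This is illustrative rather than the team’s actual component: each marker id maps to a keyframe rotation for the content rig, and on every frame the rig eases toward whichever keyframe was seen most recently, so the content persists even if a marker drops out for a moment.

```javascript
// Illustrative A-Frame component (not the team's actual code): each fiducial
// marker corresponds to a keyframe rotation of the content rig, and the rig
// eases toward the keyframe of the most recently detected marker.
// Assumes the <a-marker> elements emit "markerFound" (AR.js can emit these).
AFRAME.registerComponent("keyframe-rotator", {
  schema: { speed: { default: 4 } }, // larger = snappier easing

  init: function () {
    // Hypothetical mapping from marker element ids to rig yaw (radians).
    this.keyframes = { "marker-front": 0, "marker-side": Math.PI / 2, "marker-back": Math.PI };
    this.targetYaw = 0;
    Object.keys(this.keyframes).forEach((id) => {
      const marker = document.getElementById(id);
      if (marker) {
        marker.addEventListener("markerFound", () => {
          this.targetYaw = this.keyframes[id];
        });
      }
    });
  },

  tick: function (time, deltaMs) {
    // Ease toward the current target yaw along the shortest path.
    const dt = deltaMs / 1000;
    const current = this.el.object3D.rotation.y;
    let diff = this.targetYaw - current;
    diff = Math.atan2(Math.sin(diff), Math.cos(diff)); // wrap to [-PI, PI]
    this.el.object3D.rotation.y = current + diff * Math.min(1, this.data.speed * dt);
  },
});
// Usage (hypothetical): <a-entity id="content-rig" keyframe-rotator> ... </a-entity>
```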

giphy_(30).gif


Using this rotation motion (instead of a back-and-forth tilt, for instance), we are lowering the risk of losing the marker, allowing the content to persist in a natural way, and making it more likely that the final user experience will be seamless.

giphy_(28).gif


3. Maximizing AR element visibility for a content-rich AR experience

Here’s something we learned about content-rich AR experiences, from Alex:

“Displaying text (and doing it beautifully) is difficult in computer graphics. You need text to look good at multiple scales and at multiple distances and from multiple angles! That’s why we ended up generating a signed distance field font, which is a bitmap font (but a special one) that uses signed distance fields to beautifully rasterize text. (You can read more about it here.)

“The other thing about text in 3D graphics is that unless you’ve written yourself some handy library, you’re having to do all of the content layout manually. There are a few basic features that were available to us (e.g. alignment of text), but a lot of the work involved flat-out building the layouts that Svante had designed and calculating where to put text & writing functions that could generalize this so it wasn’t 100% hard-coded. If you’re used to slinging CSS or using nice built-in iOS features, you may not appreciate the effort that goes into text in graphics… and now you know why you rarely see text-rich AR apps!”
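For context, A-Frame’s built-in text component already renders with SDF/MSDF bitmap fonts, so putting one block of text on a panel can be as small as the snippet below (the font URL and layout numbers are placeholders). The hard part Alex describes is everything around it: sizing, wrapping, and positioning many of these blocks so they match a designed layout.

```html
<!-- Sketch: A-Frame's text component renders SDF/MSDF bitmap fonts.
     The font URL and layout numbers are placeholders. -->
<a-entity
  geometry="primitive: plane; width: 1.2; height: 0.6"
  material="color: #111; opacity: 0.8"
  text="value: Apply 2-3 drops to clean skin, morning and evening.;
        font: https://example.com/fonts/brand-msdf.json;
        width: 1.1; wrapCount: 34; align: left; color: #fff"
  position="0 0.8 0">
</a-entity>
```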

The last element we built out this week was making Svante’s cool darkened-background design come to life. Alex explains, “The most efficient way to do a blur is usually to use a ‘shader’, which is a program you run on a graphics card. You take a texture or an image and you pass it through that shader, where all the pixels get transformed.

“There were some tricks to plugging all of this into AR.js via A-Frame: for example, making sure the blurred area is always the same size as the webcam screen, which involved transforming those vertices to be a certain size. It wasn’t necessarily difficult - but it was a lot of things to learn in a short amount of time.”
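To give a flavor of that shader work, here is a minimal, unoptimized sketch of the idea in Three.js (not the project’s actual shader): a ShaderMaterial whose fragment shader averages neighbouring texels of the camera feed and darkens the result.

```javascript
// Minimal sketch of a blur-and-darken pass (not the project's actual shader).
// Assumes `video` is a <video> element already playing the webcam stream.
const webcamTexture = new THREE.VideoTexture(video);

const blurMaterial = new THREE.ShaderMaterial({
  uniforms: {
    map: { value: webcamTexture },
    texelSize: { value: new THREE.Vector2(1 / 640, 1 / 480) }, // 1 / feed resolution
  },
  vertexShader: /* glsl */ `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    uniform sampler2D map;
    uniform vec2 texelSize;
    varying vec2 vUv;
    void main() {
      // 5x5 box blur around the current pixel.
      vec4 sum = vec4(0.0);
      for (int x = -2; x <= 2; x++) {
        for (int y = -2; y <= 2; y++) {
          sum += texture2D(map, vUv + vec2(float(x), float(y)) * texelSize);
        }
      }
      vec4 blurred = sum / 25.0;
      // Darken so the AR content stands out against the background.
      gl_FragColor = vec4(blurred.rgb * 0.5, 1.0);
    }
  `,
});
// The material is then applied to a plane scaled to cover the webcam view.
```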

Despite these challenges, we were able to get this working by the end of week two, which was a win!

giphy_(29).gif


P.S. Tip for all AR developers: ngrok.io turned out to be invaluable for helping us test things out on our phones. Before we discovered it, running code on the phone required a pretty complex choreography of copying over security certificates. ngrok exposes the server running on your local computer at a public HTTPS URL that any device can reach, which provides the secure context browsers require before AR can access the camera, and made testing so much faster.

Coming up in Week 3: It all comes together! The pieces we’ve been tracking thus far (content, design, and development) must all integrate with each other into one working demo.

Read More

The New Normal: How to Translate the Luxury Retail Experience into Digital

Not long ago, 24 “anti-laws” of marketing detailed principles — like “forget about positioning,” “make it difficult for clients to buy,” and “do not sell openly on the internet” — that counterintuitively helped luxury brands set themselves apart and command high prices.

Then, in June 2017, LVMH (a French conglomerate that owns luxury brands like Louis Vuitton, Moët & Chandon, Christian Dior, and more) launched their multi-brand ecommerce website. And that seemed to put the final nail in the coffin of the traditional marketing strategies that governed the luxury retail experience.

Technology has changed everything. Today, most retailers are living on the same plane — the digital plane. The plane where ecommerce giants like Amazon have taught consumers that they can get pretty much anything worth wanting quickly and for a competitive price. The plane where a great digital retail experience is the norm.

Here’s how the times are changing for luxury retailers, and strategies that decision-makers in the luxury retail space can implement to translate their unique brand into a digital format. 

Digital is the New Normal for Luxury Retail

The average age for a luxury consumer has fallen dramatically from 48 to 34, substantially impacting shopping practices. 

Deloitte found that 42% of luxury purchases made by Millennials are entirely digital. And a report from McKinsey found that today’s typical luxury consumer takes an omnichannel approach to shopping. They may seek advice from peers offline or on social media, read reviews on various blogs, and finally complete their purchase in-store or directly from the retailer’s website.

By 2019, consumers were already spending $37 billion online on personal luxury goods every year. By 2025, 25% of the luxury industry’s value is going to come from online purchases.

25-percent-of-luxury-Industry-online.png

Consumers are more than ready to spend money on a fully-digital luxury retail experience. Here’s how managers in the luxury retail space can update their strategies to keep up with modern-day demands.

Strategies for Taking Your Luxury Retail Experience Online

Luxury retailers face a new generation of technology and shoppers that require them to go omnichannel, adopt personalization, and implement other digital strategies that we’ll talk about today if they want to keep their position in the global market.  

Take Advantage of the Online Purchasing Experience

One of the best ways retailers can provide a luxurious experience and retain customer loyalty in the digital age is by taking a direct-to-consumer (DTC) approach.

Selling direct to consumers is exactly what it sounds like — your brand selling your products directly to your customers, so you get to control the experience instead of any third parties or other intermediaries.

This approach empowers brands to continue to share their heritage, enforce their bonds with loyal customers, and provide value through the luxurious experience they’ve always excelled at in-person — just using a different digital platform.

The digital storefront on which you choose to host your online shopping experience should include modern touches that make the shopping experience feel luxurious, including flawless functionality across desktop and mobile devices, advanced search capabilities, real-time availability updates, a multi-media presentation that enables shoppers to experience products in various ways, and easy-to-use customization features for products that can be personalized.

Commercetools is an example of a cloud-based ecommerce platform built on a microservices architecture, making it scalable and easy to integrate into the rest of your digital business systems — an important feature that we’ll talk about later.

Create Intimate Digital Connections Through Personalization

Whether in-store or online, no luxury shopper wants to be anonymous to the brands they patronize. They have a name, unique preferences, brand experiences, and a purchase history that they expect you to utilize in your communications with them. Fostering a connection with your brand isn’t just a nice “perk” — it’s a significant factor in revenue and profit performance.

MBLM’s “Brand Intimacy Study 2019” found that U.S. brands that had built strong, intimate connections with consumers over the previous decade significantly outperformed the top brands in both the Fortune 500 and S&P indices when it came to revenue and profit. The same study — which analyzed 6,200 consumer responses and 56,000 brand evaluations — ranked the luxury industry 14th out of 15 industries when it comes to fostering a personal connection with consumers.

Ouch! The silver lining? There’s room to improve. Enter personalization.

consumers-want-personalization.png


Deloitte’s 2017 report “Global Powers of Luxury Goods” found that almost half of luxury retail shoppers desire personalization. What does that look like in practice? It could be making a product recommendation based on an occasion you know the consumer has coming up, or triggering an offer email based on a consumer’s behavior and preferences. Perhaps that email displays dynamic call-to-action buttons tailored to language the customer is likely to respond to (like their name).

Now, how do you deliver this kind of personalization at scale? That’s a trick only technology can deliver — technology that we’ll detail later in this article.

Fashion Engaging Post-Purchase Communications

In the “UK E-Commerce Shipping Study 2020: Fashion Edition,” parcelLab found that only one out of the ten leading British luxury fashion brands followed up with their customers right after completing a transaction — leaving post-purchase communication solely in the hands of the delivery company which, as many of us have experienced, can quickly lead to miscommunication and disappointment. 

In the retail industry, the average email open rate is just under 14%, but the average open rate for post-purchase emails is a whopping 60%. Engaging consumers in post-purchase communication is a relatively straightforward strategy with a big payoff for managers looking to translate the luxury retail experience into the digital realm.

Conducting proactive outreach in the window right after a transaction allows sellers to continue the immersion in the luxury experience, strengthen brand loyalty, share tips on taking care of their new purchase, provide any shipping updates, answer questions, recommend complementary products, and, of course, set the tone for a long relationship full of future transactions.

Does the thought of adding an all-new online shopping platform, personalization, and post-sale communication to your to-do list have your head spinning? Then keep reading to learn about the technology that can help.

How Headless CMS Technology Can Help You Deliver on the New Luxury Retail Norms

According to the founder of the Customer Experience Group and luxury industry expert Christophe Caïs, shoppers in the luxury retail space often value the brand experience more than the actual products they purchase. And we already know that in 2020, customer experience has become a more critical brand differentiator than price and products themselves.

It’s evident that luxury retail brands absolutely must prioritize bringing the above personalization, post-purchase communication, and online sales strategies to life. A headless content management system (CMS) platform can help automate content creation and synchronize all the tools you need to build a shopping journey that lives up to consumer expectations.

A headless CMS is an API-first content management platform, which means it’s built from the ground up using application programming interface (API) technology that allows microservices, applications, and other systems to function together. This structure makes it easy for content teams to create, optimize, and distribute content to every consumer channel you serve.

It also enables retail managers to integrate ecommerce platforms, CRMs, CDPs, translation tools, localization platforms, and everything else they need to empower online shopping, personalization, post-sale communication, and any other strategies to help bring the luxury retail experience online.

headless-cms-graphic-sm.png

To learn more about using a headless CMS instance to build the system that you need to create luxurious digital customer journeys, download a free copy of Gartner’s “Elevate Your Horizontal Portal to a Digital Experience Platform.”

For more insight on the intersection of headless CMS and the retail space, read “The Omnichannel Technology You Need to Navigate a Fragmented Retail Market” and the very timely “The Digital Secrets of Retailers Who Are Thriving Right Now.”

And to get a better feel for how headless CMS may work in your retail organization before you go all-in, get in touch with the Contentstack team today, and we’ll set you up with your own full-access, no-obligation trial run.

Read More

Augmented Reality Frameworks for an Enterprise Web-Based AR Application

How do you create augmented reality?

In the process of building an Augmented Reality proof of concept in under 4 weeks (see details here), the team at Valtech evaluated a series of AR frameworks and software development kits (SDKs) that would enable them to rapidly pull in data from a headless CMS (Contentstack) and display it in an Augmented Reality interface on a phone or tablet web browser. Here is their quick research report.

For total beginners to AR (like me): an AR framework is the SDK that merges the digital world on-screen with the physical world in real life. AR frameworks generally bundle a few different technologies under the hood — a vision library that tracks markers, images, or objects in the camera feed; a lot of math to register points from the camera view in 3D space — and then hook into a graphics library to render things on top of the camera view.

Which software is best for our web-based Augmented Reality use case?

The key considerations for the research were:

  • Speed. The goal was to create a working prototype as fast as possible. Once we were successfully displaying content and had completed an MVP, we could continue testing more advanced methods of object detection and tracking
    • Training custom models
    • Identifying and distinguishing objects without explicit markers
    • Potentially using OCR as a way to identify product names
    • More of a wow-factor
  • The team was agnostic on whether to work with marker or image-tracking -- willing to use whichever was most feasible for our use case.
  • Object tracking - Since the team was not trying to place objects on a real-world plane (like a floor), they realized they may not need all the features of a native iOS or Android AR library (aside from marker tracking)
  • Content display. That said, the framework needed to allow for content to be displayed in a cool and engaging way, even if we didn’t achieve fancy detection methods in 3 weeks
    • Something more dynamic than just billboarded text on video
    • Maybe some subtle animation touches to emphasize the 3D experience (e.g. very light Perlin movement in z plane)
  • Platform. The preference was for a web-based build (not requiring an app installation)

Comparing the available AR Frameworks: Marker tracking, object tracking, and platform-readiness

Here's an overview of our AR / ML library research notes:

AR.js

  • Based on ARToolKit (via jsartoolkit5)
  • Cross-browser & lightweight; probably the least-effort way to get started
  • Offers both marker & image tracking. Image tracking uses NFT markers.
  • Platforms: Web (works with Three.js or A-Frame.js)


Zappar WebAR

  • Has SDK for Three.js.
  • SDK seems free; content creation tools are paid
  • Image tracking only
  • Platforms: Web (Three.js / A-Frame / vanilla JS); Unity; C++


ARKit

  • Not web-based
  • Image tracking is straightforward, but can’t distinguish between two similar labels with different text
  • Offers both marker & image tracking
  • Platforms: iOS


Argon.js

  • Uses Vuforia*
  • Has a complex absolute coordinate system that must be translated into graphics coordinates. No Github updates since 2017.
  • Offers both marker & image tracking
  • Platforms: Works in Argon4 browser


WebXR

  • Primarily for interacting with specialized AR/VR hardware (headsets, etc.)


XR.plus

  • Primarily an AR content publishing tool to create 3D scenes


Google MediaPipe (KNIFT)

  • Uses template images to match objects in different orientations (allows for perspective distortion). You can learn more here.
  • Marker and image tracking: Yes, sort of... even better. KNIFT is an advanced machine learning model that does NFT (Natural Feature Tracking), or image tracking -- the same as AR.js does, but much better and faster. It doesn't have explicit fiducial marker tracking, but markers are high-contrast simplified images, so it would handle them well, too.
  • Platforms: Just Android so far, doesn't seem to have been ported to iOS or Web yet


Google Vision API - product search

  • Create a set of product images, match a reference image to find the closest match in the set.
  • Cloud-based. May or may not work sufficiently in real-time?
  • Image classification
  • Platforms: Mobile / web


Google AutoML (Also option for video-based object tracking)

  • Train your own models to classify images according to custom labels
  • Image classification
  • Platforms: Any


Ml5.js

  • Friendly ML library for the web. Experimented with some samples that used pre-trained models for object detection and was able to identify “bottles” and track their position (see the sketch below).
  • Object detection
  • Platforms: Web
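A sketch of that kind of experiment: the pre-trained COCO-SSD model run against a webcam video element, logging any “bottle” detections and their positions. The API shape follows the ml5.js 0.x object detector, so treat the details as illustrative.

```javascript
// Illustrative ml5.js experiment: detect objects in the webcam feed with the
// pre-trained COCO-SSD model and log any "bottle" detections.
// API shape follows the ml5.js 0.x objectDetector; details may vary by version.
const video = document.querySelector("#webcam"); // assumed webcam <video> element

const detector = ml5.objectDetector("cocossd", () => detect());

function detect() {
  detector.detect(video, (err, results) => {
    if (err) return console.error(err);
    results
      .filter((r) => r.label === "bottle")
      .forEach((r) => console.log("bottle at", r.x, r.y, r.width, r.height, r.confidence));
    detect(); // keep detecting, frame after frame
  });
}
```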


p5xr

  • AR add-on for p5. Uses WebXR.
  • Platforms: Seems geared towards VR / Cardboard

* Vuforia is an API that is popular among a lot of AR apps for image / object tracking. Its tracking technology is widely used in apps and games, but is rivaled by modern computer vision APIs - from Google, for example.

Graphics Library Research

Under the hood, browsers usually use WebGL to render 3D to a <canvas> element, but there are several popular graphics libraries that make writing WebGL code easier. Here's what we found in our graphics library research:

Three.js

  • WebGL framework in JavaScript. Full control over creating graphics objects, etc., but requires more manual work (see the sketch below).
  • Examples: Github Repo
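To illustrate the “more manual work” point, here is a minimal Three.js scene: you create the renderer, camera, scene, and objects yourself, and drive the render loop. (This sketch assumes a module bundler or import map for the `three` import.)

```javascript
// Minimal Three.js scene: renderer, camera, scene, one object, render loop.
import * as THREE from "three";

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 3;

const cube = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshNormalMaterial());
scene.add(cube);

renderer.setAnimationLoop(() => {
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
});
```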


A-Frame.js

  • HTML wrapper for Three.js that integrates an entity-component system for composability, as well as a visual 3D inspector. Built on HTML / the DOM
  • Easy to create custom components with actions that happen in a lifecycle (on component attach, on every frame, etc.)
  • Examples: Github Repo


PlayCanvas

  • WebGL framework with Unity-like editor
  • Could be convenient for quickly throwing together complex scenes. You can link out a scene to be displayed on top of a marker, or manually program a scene. Potentially less obvious to visualize / edit / collaborate / see what’s going on in code if you use an editor and publish a scene.
  • Slightly unclear how easy it is to dynamically generate scenes based on incoming data / how to instantiate a scene with parameters
  • Examples: Github Repo


Recommendations for this project

Here is what we decided to go with for our AR demo.

  • Start with AR.js (another option was Zappar) + A-Frame.js for a basic working prototype
  • In the longer term, explore options for advanced object recognition and tracking

Read more about determining the best way to do marker tracking; narrowing down the use case and developing the interaction design; and content modeling for AR in our full coverage of week one of development.

Read More

Augmented Reality for Retail: From Concept to Game Plan (Week 1 / 3 of Development)

Week One: AR Framework & User Research, Marker Tracking, Content Modeling, and Interaction Design

The team at Valtech is building a Contentstack-powered Augmented Reality proof of concept in 4 weeks.

If you’re just joining us, you can find a summary of week zero (how we got this far) here. Today we’re covering week one, the goal of which was to define everything needed to accomplish the POC.

The concept so far: We are building an application that will take some complex information from a beauty / skincare product and make it easier to understand through augmented reality and personalization.


Experience & Interaction Design

Before any development work could begin, our concept had to be translated into isolated problem statements which could then be made into tasks to fill out our three one-week sprints. This meant it was time for another brainstorming session.

What experience are we creating?

The team spent 3 hours on Zoom and in their Miro board with the goal of hammering out the following:

  1. What problem are we solving for customers?
  2. What specifically are we going to demonstrate in our POC?
  3. What data are we going to display?
  4. What is the interaction model?

1. What problem are we solving for customers? What task do we want our users to be able to accomplish? What are the user needs?

For many at Valtech, this step was a rapid onboarding into the world of skincare. First, the team took a look at some major skincare retailers to get an idea of the basic taxonomy of skincare products: What do they call things, and how do they classify them?

They also did some user research: a quick internal Google Forms survey that aimed to identify what the biggest skincare questions, concerns, and needs were among real people who might use this kind of app.

Based on these two research exercises, the team found the following: there is very little variation in the way products are categorized (cleansers, exfoliators, moisturizers, etc., came up over and over again as product category descriptors), and people are generally overwhelmed by the amount of undifferentiated information thrown at them by skincare products and brands.

In other words, though you might know you need a cleanser, moisturizer, and sunscreen, that still doesn’t tell you which one works best for you; whether the ingredients will help or harm you personally, or interact poorly with each other; or even how much of each to use, when, and in what order. So there was definitely an unmet information simplification need here. Check.


2. What specifically are we going to demonstrate in our POC? What products are we going to work with for scanning & information display?

    Here, the Valtech team pulled in some beauty & skincare subject matter experts that they found within the company. They identified the different steps that go into a skincare routine:

    1. Cleanser - to clean the skin
    2. Toner - an astringent to shrink the pores and change pH
    3. Serum - which nobody could explain, beyond “something magical with vitamins”
    4. Moisturizer - to prevent the skin from drying out
    5. Sunblock - to protect from the damaging effects of the sun

    BIG INSIGHT #1. PEOPLE ARE ESPECIALLY CONFUSED ABOUT A PARTICULAR CATEGORY OF SKINCARE PRODUCTS.

    Based on this, the team decided that for the purposes of this demo, the specific example they would zero in on would be helping people navigate selecting and using a serum, since this is the product they could find the least clarity on (and could therefore reasonably surmise that the information needs for this product would be immediately obvious to the largest number of people).

    Screenshot_2020-09-07_at_12.53.32.png

    What on earth is a serum?

    3. What data are we going to display?

    At the root of this next question is one that the team assures me they keep coming back to over and over again: How are we actually going to make this useful?

    Explains Jason, “if people are just looking at words, then it’s essentially just a website brochure. We want users to be able to interact with this in a way that can help them accomplish the tasks they need to accomplish.”

    In the case of figuring out what to do with a serum, the team identified the following information needs that could arise for our POC:

    • Concentration of serum — do I need 5% or 2% “active ingredient” (e.g. Vitamin C)?
    • Usage recommendations — how do I use it, and where does it fit into my routine (in which order, how many times per week)?
    • Product recommendations — what are other products that go along with this serum (e.g. the next step in the suggested skincare regimen)?

    4. What is the interaction model? How does the user interface with the system?

    Looking at the usage story so far, the team mapped out the following: 

    Someone wants to buy a serum from a particular brand. They want to know which product is recommended for them (based on — for this POC — a pre-existing “profile” with preferences, current routine, etc. already known), how to use it, and whether at some point the products they are using need to change in any way (e.g. concentration, increase sunblock or moisturizer, etc.) This is when the team hit on…

    BIG INSIGHT #2. THIS SERVICE WILL BE THE MOST USEFUL IF WE STICK TO ONE PRODUCT OVER TIME.

    Up until this point, the idea had been to make an app that helps to choose between products in-store, and have it offer several kinds of interactions depending on what kind of help you were looking for.

    But the results of the research and brainstorming showed that with skincare, there isn’t necessarily a need to constantly keep shopping for new products. Consumers have a desire to select a product that is guaranteed to do what they want to accomplish at that point in time (e.g. reduce wrinkles, moisturize dry skin, protect from the sun) and then understand exactly how to make that happen once they take it home. The questions don’t stop once you leave the store with the product in-hand. There is still a lot to understand about making this product most effective for me, in my routine, right now.

    So, the team decided to build 3 interaction scenarios that would show just that — personalization of information about one skincare product over time.

    What exactly will we build?

    Interaction Scenarios

    I didn’t know what interaction design was, so I asked Svante Nilson, Senior Designer.

    It’s basically: How we want users of the application to consume the AR content we are producing, as well as designing the look and feel of that content.

    Or in other words: What's that AR experience going to look like and feel like? What's going to show up on your phone, what's going to display around the product? How's it going to display? How are you going to interact with that? And why would people want to use this? (There’s that #1 question again.)

    And then repeating that over the different kinds of interactions: in the store and at home.

    Sketching comps

    The team zeroed in on three scenarios that they wanted to build out, and Svante got to work on designing them as pencil sketches. He would then run these past the engineers to determine feasibility, and adjust as needed, until they arrived at interactions that seemed easy, useful, and possible to build quickly.

    Scenario I: At the store

    Differentiate between multiple bottles on a shelf. AR information here can include things like reviews, cost and affordability, ingredients from the perspective of allergic reactions or sustainability, and any other things that might make the product stand out to you to make you want to purchase it.

    Screenshot_2020-09-09_at_14.43.28.png

    In this scenario, you are scanning the shelf with your phone. You are not holding any products in your hands, so you are able to tap and interact with the augmented reality information laid out around the product using your free hand. This is what you can see being worked out in the sketch below.

    Screenshot_2020-09-09_at_14.43.33.png

    Scenario II: At home, first time using the product

    Once home, receive AR onboarding on using this product: things like frequency per day and usage steps.

    Here, instead of holding your device (phone or tablet) at a distance from products that are on a shelf, you’re holding the product in one hand and your device in the other hand. Your interactions with the AR display will have to be in the real world, using the product itself as a controller. Think rotating the product, or swiping up and down on the surface of the bottle, to see additional information. Below are early sketches of these interactions.

    sketch_3.png


    Scenario III: At home, after a while

    After you’ve been using the product for a few months, your needs for information will change. You may want to progress to another product concentration or another product in the line; your frequency of use of this product may need to be adjusted. You may also want to leave a review.

    To facilitate these needs, the interaction model and visual layout can stay the same, while prioritizing other information in the AR experience itself. In the sketches below you can see a benefit of using the bottle as a controller: this naturally allows for adding “tabs” with additional personalized information and notifications (e.g.: the humidity index in your area is low; use additional moisturizer with this product; or: you’ve been using this product for 3 months, time to think about changing the concentration.)

    By focusing on just one product and one product line, from one brand, we are not only narrowing our scope to be able to complete the project in this tight timeline. We are also making it more applicable to an enterprise retail use case for Augmented Reality: one of helping a skincare brand tell their story across several interactions, and eventually, products.

    Below, you can see the current mock-up that came from this sketch interaction design process.

    sketch_5.png

    Early preview of the real interaction and label

    Content Modeling

    Identifying and populating the data that needs to be stored and accessed

    As the identified scenarios make clear, there is a lot of information that our AR demo will need to access. Some of it will be dynamic, like personalized product recommendations or changing concentrations of the active ingredient over time. Some will be static: brand names, product lines, ingredients. All of this will need to be stored in Contentstack in a manner that makes it both easy to query, and easy to edit or modify. This process is called content modeling, and we will cover it in detail in Week 2.

    Development

    On the development side, the team also started with some research. Before anything can be built in Augmented Reality, there are a number of parameters that need to be defined. It’s not too different from a website or app project. You need to define language, database, and framework (for us: AR framework and graphics libraries), and any other parameters specific to the project. For us, that meant determining how our AR application will identify the object that’s in front of it, as well as how it will “know” the bottle is being used as a controller.

    I. AR Frameworks and Graphics Libraries

    Augmented reality development is somewhat of an uncharted territory. While there are a host of SDKs available for developers wanting to build AR experiences, they aren’t all necessarily enterprise-grade, cross-platform, or even production-ready. So the first step for developer Alex Olivier was to do her homework: evaluate the available AR frameworks and graphics libraries to determine which of these would fit our criteria: suitable for a web AR experience (not requiring a native app installation), and as close as we could get to something that a business might actually use to build this kind of application for their own brand.

    For the curious: the research is documented here.

    The TL;DR is that we chose to go with AR.js (as the best option for building AR for mobile web browsers), Three.js (a WebGL framework in JavaScript), and A-Frame (a framework for Three.js that lets you write HTML-like elements to compose a 3D scene, and also provides a visual 3D inspector). The next challenge was to get these tools to bend to our will.

    Our goal was to be able to track a (serum) bottle’s movement in such a way that our application could determine its position and behave a certain way in response. Or more simply, for the first test case: If the bottle tilts to the right or the left, change something.

    II. Spatial coordinates and marker tracking for using the bottle as a controller

    AR.js library — Where is the marker?

    As the team started working with AR.js midweek, they hit a few road bumps.

    Danielle notes, “The biggest challenge with the AR library is ensuring the content appears where we want it to appear, which is the biggest challenge for any AR application!”

    They started with Natural Feature Tracking (NFT) in AR.js but noticed issues with the alignment between the image and the 3D object overlaid on it. They then looked into how the coordinate system was set up in AR.js, which led them to discover another underlying issue around the virtual camera: AR.js likes to position the camera or the marker at the origin of the coordinate system. It has different modes for whether the camera is fixed or in motion, which can affect how it tracks multiple markers.

    Essentially, the coordinate system in AR.js is set up to look at objects where either the markers are stationary or the virtual camera is stationary, and has trouble when both are moving around. 

    Marker tracking and fiducial markers to identify object motion

    We tested a couple of different markers to make it easier for AR.js to find the serum bottle. QR codes were especially interesting as these are in common use today. However, ultimately the far better performing markers turned out to be fiducial markers.

    Explains Jason, “Fiducial markers are black and white images which look kind of like a QR code but are simpler and have a black square bar around them, and have any kind of rotationally asymmetrical symbol in the middle so the computer can tell which way that square is turned. These have been used in AR for a long time, so there is a lot of solid code around how to deal with them.”

    Fiducial marker

    Three.js and A-Frame to Act When Motion is Detected

    As a last step, we tested what happens when we try to tell AR.js to recognize the rotation of the bottle. Under the hood, AR.js leverages the Three.js WebGL framework, and there’s another framework called A-Frame (originally from Mozilla) that can be used with both of them to quickly write HTML-like markup to describe your scene. The team built a custom attribute for A-Frame elements that triggered a rotation event when the bottle is tilted left or right in front of the camera.
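    The sketch below shows the shape of that idea, though it is illustrative rather than the team’s actual attribute: a component on the marker entity watches the marker’s roll each frame and emits a “tilted” event when it crosses a threshold, and a listener reacts by recoloring the overlaid content. (Depending on AR.js’s matrix mode, the roll may live on a different axis.)

```javascript
// Illustrative A-Frame component (not the team's actual code): watch the
// marker's roll each frame and emit a "tilted" event when it crosses a
// threshold. Depending on AR.js's matrix mode, roll may be on another axis.
AFRAME.registerComponent("tilt-detector", {
  schema: { threshold: { default: 0.4 } }, // radians of roll that count as a tilt

  init: function () {
    this.lastDirection = "level";
  },

  tick: function () {
    const roll = this.el.object3D.rotation.z;
    let direction = "level";
    if (roll > this.data.threshold) direction = "left";
    else if (roll < -this.data.threshold) direction = "right";

    if (direction !== this.lastDirection) {
      this.lastDirection = direction;
      this.el.emit("tilted", { direction });
    }
  },
});

// Elsewhere: react to the event, e.g. recolor the overlaid content.
const marker = document.querySelector("a-marker");       // marker carrying tilt-detector
const panel = document.querySelector("#content-panel");  // hypothetical content entity
marker.addEventListener("tilted", (e) => {
  panel.setAttribute("material", "color",
    e.detail.direction === "level" ? "#ffffff" : "#3399ff");
});
```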

    … And it worked!

    In the video below, you can see that as the bottle is turned, the attribute that we created is looking at the acceleration rate and which way it’s turning, and when it determines that it’s tilted, it switches the image in the middle to blue.

    So we’ve got an interaction using the bottle as a controller, which is pretty great!

    Next week: learn how we will pull in data from Contentstack to populate the AR interactions, the benefits of a headless system for creating AR experiences, and our path towards building real scenarios and views, using final assets!

    Read More

    How to Concept and Pitch an Augmented Reality Demo (in 1 Week or Less)

    Building an Augmented Reality POC in 4 Weeks

    Week Zero: Getting to the Pitch

    Sometime in the beginning of summer, the Contentstack marketing team called up Valtech, and asked them to build an Augmented Reality (AR) demo on top of our CMS.

    We caught Pascal Lagarde (VP Commerce) and Auke van Urk (CTO) in a good mood. They said yes. Then everyone went on summer holidays. Until about 2 weeks ago, when Pascal called us back.

    He said: “We’ll build you an AR demo. And we’re going to do it in the next 4 weeks.”

    This is the story of how they did it, told (almost) live.

    Today, what happened in Week Zero: how the development team at Valtech went from receiving our somewhat vague brief to pitching us two sharply defined concepts a week later.

    We’ll even be sharing the actual pitch deck. (It’s at the bottom of this post.)


    Getting the Brief

    Jason Alderman is a senior engineer at Valtech, but he used to work designing interactive exhibits in museums. One of his favorite projects was a donation machine for a museum lobby, which was a giant glass porthole attached to a set of sails. When the machine detected a donation bill, it would suck it up through a snaking tube into the porthole, which would then activate a sensor that would make the sails blow as if in the wind.

    He’s excited about the possibilities of Augmented Reality. “I like the connection between the physical and the digital world. Right now we're holding up these small pieces of metal and glass up to our faces and moving them around like a magic window. The technology is still evolving. I'm really interested to see what the end result will be.”

    Jason was the first team member to get tasked with responding to “the brief” which was, admittedly, a somewhat rough Google Doc where a few Contentstack people had traded ideas with a few Valtech people along the lines of “could it look like Minority Report?” and “it needs to be interesting for marketers and developers alike”.

    Screenshot_2020-08-30_at_13.59.22.png

    This was the actual brief.

    Jason is positive about this experience, telling me: “We were given a lot of creative free rein. That's one of the things I love about this company — they really invest in the people and let them run with their ideas.” He planned a workshop with a few other developers, UX researchers, and experience designers. “We figured that we probably needed to get as many perspectives inside the company as we could and brainstorm things.”

    Identifying the Parameters: Why Contentstack?

    1. IDENTIFY HOW A HEADLESS CMS WILL BE USEFUL IN AN AR CONTEXT

    Contentstack is a Content Experience Platform (CXP) with a headless content management system (CMS) at the core. It’s essentially a highly user-friendly database and environment for content creation and storage (text, media, or otherwise) with powerful APIs and integration capabilities that allow that content to easily be delivered to any kind of channel or environment. Traditionally, content management systems have been used to power the web, but today the demand for content-rich experiences is significantly more diverse. Beyond web and the mobile web and even app, brands need content to exist in an atomic form, ready to be delivered in an optimized and personalized way to digital billboards, point of sale terminals, social media, marketing automation systems — and yes, Virtual Reality and Augmented Reality experiences.

    Valtech is one of the founding partners together with Contentstack of the MACH Alliance, which is a governing and educational body promoting a new standard for enterprise architecture: Microservices, API-first, Cloud-native SaaS, and headless. Says Jason, “It's a way of having an enterprise CMS that can feed all sorts of different front-ends from mobile apps to react apps.”

    2. LIST KNOWN STRENGTHS OF CONTENTSTACK CMS

    The Valtech team made a list of all the strengths of the Contentstack platform that could be highlighted in an AR demo, which looked like this (see more of this in the pitch deck at the end of this post).

    Screenshot_2020-08-30_at_14.31.03.png

    The strengths of Contentstack for AR demo, as identified by Valtech.

    • Detailed content models can be structured easily to feed websites, apps, and of course, AR.
    • Internationalization: robust multilingual support, including fallback languages — for instance, if there is no content for a given channel in Mexican Spanish, you can fall back to general Spanish content.
    • Robust ability to set up workflows — easily configuring layered steps comprising different actions (approval, commenting, adding elements) that can be set up to automatically push to the next stage.
    • Tremendous capability for personalization through powerful integration with tools such as Optimizely or Dynamic Yield.


    Isolating the Task: Why AR?

    AR is hot right now. But the team that took our brief wasn’t a pure AR team. It was a group of people who know how to build experiences and augment them with technology in order to make them either useful, or really fun, or both. Given the brief of delivering content-rich experiences pulled from a headless CMS, their first question was "are we sure the best way to accomplish showing off this CMS is through AR?"

    1. WHAT ARE THE BENEFITS AND USEFUL APPLICATIONS OF AUGMENTED REALITY?

    Along with Jason, leading the brainstorm efforts was Danielle Holstine, Delivery Manager — a software engineer turned project manager — who spent ten years developing AR and VR technology. She sees potential for AR in everything: “To experience VR you currently have to put this big thing on your face and it's like blinders — you can't see anything else around you. AR, on the other hand, uses what you're already seeing and just adds information on top of it, so it's additive.” Especially interesting is the potential of web-based AR and the ability to move away from native apps, which makes these experiences more accessible and easier to engage with. “Phone manufacturers like Apple and Samsung have been investing in the hardware required to do augmented reality functions: improving cameras, sensors, all those kinds of things. And equally on the software side, there's been a lot of development on browser-based AR so it no longer requires a dedicated application to make use of your camera and the sensors on your phone, but rather being able to access the information through just a browser.”

    But UX researcher and designer Hayley Sikora had questions. “Knowing that we’re working with an amazing CMS and that the brief was to convey information through it, my question was, why are we doing it in AR? Because it’s very difficult to get large amounts of information across in AR.”

    Britt Midgette, Sr. Experience Designer, agrees. “We can’t just do AR because it’s cool. It must enhance the experience in a needful way. VR is a different thing — you are creating worlds, there’s ‘no reason’ to do that — but it’s fun, and you can add a lot of stuff in that world. You can still show people a lot of things in an AR world but really — why?! Some things should be static. AR can just get in the way of what people are trying to do.”

    2. FRAME THE BRAINSTORM TO SERVE THE OPPORTUNITY

    The resolution came from framing the question in a storytelling narrative:

    Since Augmented Reality is layering information on top of the real world, hopefully to make things easier and provide context, there are industries that have complex information, which can be simplified or explained, personalized, and delivered through an AR experience.

    The team (Jason, Danielle, Hayley, Britt, Pascal, and engineers Alex Olivier and Brian Harrington) then broke down this narrative into its component parts and discussed each in turn.

    The goal was to come up with 1-2 strong concepts that could be presented to Contentstack in a pitch the following Monday.


    BRAINSTORM Q1. What are industries that have complex information?

    The team used Miro as a digital whiteboard.

    Screenshot_2020-08-30_at_14.03.44.png

    The Miro board with dot voting star stickers.

    The ideas did not start out clustered together, but rather as a brain-dump of all kinds of industries that have complex information that might be difficult to understand, or that people might need some help digging through and figuring out what is relevant. Some of the ideas included:

    • Vitamins, health, skincare, beauty products
    • Medicine & pharmaceuticals
    • Software documentation, technology
    • College admissions
    • Insurance, credit cards, finance
    • Real estate, apartment hunting
    • Outdoor equipment, travel
    • Home goods, auto parts, instruction manuals (and IKEA)

    The team plotted it all out in a grid of post-its, then clustered it into meaningful groups, then voted on their favorites. The two industries that seemed to be the most popular were skincare & beauty and museums & education.

    Screenshot_2020-08-30_at_15.31.22.png

    What are industries that have complex information? Miro board brainstorming.

    That was the first part of the narrative: There are industries that have complex information, which can be simplified or explained, personalized and delivered through an AR experience.

    The next step was to identify the kinds of information that could be simplified and explained in the two most favoured industries.

    BRAINSTORM Q2: Given the industries “beauty & skincare” and “museums & education”, what is their complex information?

    Questions about beauty and skincare came naturally to many people in the room, like Hayley, who admits, “I have so many questions about what goes into my own skincare regimen.”

    Ideas listed included:

    • Ingredients: How can I understand the composition of this product? Are there known allergens in this? How have these ingredients been sourced?
    • Benefits: What is actually healthy, versus just a “scam”? What is this product promising to do, and how can I track whether it’s actually working?
    • Reviews: Can I see a rating or review? Who recommends this product? Are there influencers that have covered it?

    When it came to museums & education, Hayley was inspired by the experience of her aunt, who recently decided to homeschool her children: “I was thinking that it would be a really amazing opportunity to provide kids across the world with some interactive learning tools that could, first of all, give their parents a break from having to be their homeschool teachers 100% of the time — but also give them some fun ways to learn this content."

    Ideas for museum & educational complex information included:

    • Learning management: Tracking systems for grades, assessments, progress
    • Additional context: Who was the creator of an artwork? What are narratives behind certain artifacts which give them context, beyond just the names and dates?
    • Details: Virtually dissect a dinosaur skeleton — pull out different bones and see where they were found, what they were for, and how they evolved.
    • Media: Sound clips, 3D models, music (instrument types, styles)
    • Provenance: How did the artifact get to the museum? Where was it originally created; what hands did it pass through; will it be, or has it been repatriated to the original cultures or people to whom it belongs?


    What is the kind of complex information that we could work with?

    Here the team had fleshed out the second part of the narrative: There are industries that have complex information, which can be simplified or explained, personalized and delivered through an AR experience. 

    The final piece of the puzzle was personalization.


    BRAINSTORM Q3: How can we personalize this information?

      Jason explains that without personalization, any content experience, AR-enhanced or otherwise, is just a bundle of information. The benefit of using technology to represent content in a dynamic format like AR is that it can be personalized, made highly relevant and specific to the person accessing that information.

      Adds Hayley, “Personalization is only going to continue to get more important. The newest generation is seeking more personalized material than ever because they get instant gratification all day long with personalized content that is sent to them on their social media feeds, so they're expecting that out of other channels as well.”

      How could personalization be used to de-complexify the types of information that we identified in beauty & skincare and museums & education?

      Beauty & Skincare:

      • Ingredients: Which of these ingredients will help me achieve my goals?
      • Recommendations: Based on your purchase history, preferences; hide products that might cause an allergic reaction or are otherwise incompatible with your personal history. Upload a “shelfie” and get an analysis of how this would fit into your existing routine.
      • Face scans: Similar to other Valtech projects showing makeup on someone’s face “live”, can products be recommended based on a scan of your face?
      • Phone a friend: Are there reviews I can see from people I know, or from elsewhere online? Can we support or mimic the social buying experience?

      Museums & Education:

      • Game mechanics: Tour, scavenger hunt, quiz
      • Social dynamics: Tether two people virtually to join in a trivia battle, or to share the experience in a personal way
      • Responsive content: Dynamically generating a layout of a physical space to match your preferred experience, such as drawing a “map” for you personally to follow through a museum exhibit
      • Avatars: To protect kids’ privacy, instead of putting all of their personal information into the app, can they create an avatar that represents their preferences and personality traits?
      • Text to speech: Keeping in mind that a lot of content stored in Contentstack CMS is text-based, could text-to-speech be implemented to create a personalized audio tour experience using existing written content? (See the sketch just after this list.)
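      As a rough illustration of how lightweight that last idea could be to prototype, here is a minimal sketch using the browser’s built-in Web Speech API to speak text aloud; the exhibit copy would come from the CMS, and the sample string below is only a placeholder.

```typescript
// Minimal sketch: turning CMS-stored exhibit text into spoken narration
// with the Web Speech API. The sample text is a placeholder, not real
// museum content.

function speakExhibitText(entryText: string, lang: string = "en-US"): void {
  const utterance = new SpeechSynthesisUtterance(entryText);
  utterance.lang = lang;
  utterance.rate = 1.0;              // normal speaking speed
  window.speechSynthesis.cancel();   // stop any narration already playing
  window.speechSynthesis.speak(utterance);
}

speakExhibitText("Placeholder exhibit description fetched from the CMS.");
```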


      How can we personalize this information?

      There are industries that have complex information, which can be simplified or explained, personalized and delivered through an AR experience.

      That was the end of the brainstorming session: two strong concepts had emerged to take into the pitch presentation.

      Can it be done?

      From here, the final question was: can this be done in our timeline of three weeks from this point on?

      Here’s Danielle: “We knew we had three weeks, which is a very short time, to implement something this complex. A traditional two-week sprint process obviously isn't going to cut it for this. This work needs to move so rapidly that we don't have extended periods of time to wait, to have something blocked, those kinds of things.

      “So as the brainstorm team was talking, I sketched out a plan of three one-week sprints, with rough goals for each of those weeks.

      “The first week is really focused on nailing down the technology we're going to use. So what are the AR libraries that we're going to use? How are we going to track the items? Are we going to do it with fiducial markers, are we going to do it with image-based markers, are we going to do it with object tracking... Each of those has an increasing level of complexity. So we need to make that decision really soon. The next step was nailing down our interaction models and what we want the experience to be.

      “Then the second week goal is going to be focused on really hard development: making the application, getting the data into Contentstack, and getting the data back out and visualized the way that we want it in the AR space.

      “And then the third week would be really focused on polishing and refining. So, the intention is between the first week and the second week, to actually have our proof of concept — a working thing that we can send around to everybody to test and manipulate, get some feedback on it. And then spend that last week editing, adjusting, and refining. And if we have time, adding in some of the many nice-to-haves that we left on the drawing board."
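      As a hedged sketch of what “getting the data back out” of Contentstack might look like for the AR front end, the snippet below queries the Content Delivery API over plain HTTPS. The content type uid (serum_product), environment name, field names, and credentials are placeholders for illustration, not the actual model the team built.

```typescript
// Hedged sketch: fetching published entries from Contentstack's Content
// Delivery API so the AR layer can render them. The content type uid,
// environment, field names, and credentials below are placeholders.

interface SerumEntry {
  title: string;                                      // assumed field
  ingredients?: { name: string; purpose: string }[];  // assumed field
  usage_instructions?: string;                        // assumed field
}

async function fetchSerumEntries(): Promise<SerumEntry[]> {
  const url =
    "https://cdn.contentstack.io/v3/content_types/serum_product/entries" +
    "?environment=development";

  const response = await fetch(url, {
    headers: {
      api_key: "YOUR_STACK_API_KEY",       // placeholder credential
      access_token: "YOUR_DELIVERY_TOKEN", // placeholder credential
    },
  });

  if (!response.ok) {
    throw new Error(`Contentstack request failed: ${response.status}`);
  }

  const data = await response.json();
  return data.entries as SerumEntry[];
}
```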

      The Pitch

      1. LOWEST EFFORT, HIGHEST REWARD

      Based on what they knew they could accomplish in three weeks and what had the highest potential to deliver a “wow”-factor demo, Valtech pitched Contentstack two ideas for an AR proof of concept.

      2. PRESENT IN AN EASY-TO-IMAGINE FORMAT

      Valtech kept the presentation short and pitched only one slide per concept, complete with hand-drawn illustrations that showed each concept while making it clear that this was still an idea, not something already built.

      Knowing it was possible, and armed with a wealth of ideas, Valtech presented the following two concepts to us.

      Beauty & Skincare

      What’s inside the bottle?

      Scan a product on the shelf or at home to get personalized recommendations based on the ingredients in the product. See other products that are similar based on criteria like feel or effect; see products that are different (avoiding allergens, discovering other product lines); learn about the sustainability and sourcing of the ingredients; or get instructions (influencer tips and tricks, usage recommendations from the brand).


      The beauty and skincare concept, with sketch illustration by Jason Alderman and Lindsey Harris

      Museums & Education

      Personal AR Audio Tour.

      In a museum gallery or a simulated at-home environment, receive a personalized museum audio tour using text-to-speech technology, including paths based on how objects in the museum relate to each other; paths that follow a particular preferred narrative thread or subject; and synchronization of the audio tour across devices so users can experience it together with family or friends.

      Museums & education concept, with sketch illustrations by Jason Alderman

      3. OFFER RECOMMENDATIONS & GUIDANCE

      The team also gave some guidance on their own preference, which was for the retail app. Says Jason: “I love museums, but we did not think the museum demo would be as effective as one that retailers could translate their business onto more easily.”

      Hayley adds: “The opportunities in education are almost endless because there’s so much we could make interactive and gamify. The challenge with education and museums is bureaucracy — who actually takes ownership of it? What school system is going to pay to create an AR learning program for their kids? That's just not feasible. So I think taking this down a route where we could be talking about products that can go to a broader consumer audience makes sense.”

      The Decision

      On the Contentstack side, my colleague Gal Oppenheimer (Manager, Solutions Architects) and I (I’m Varia, Director of Marketing) immediately gravitated toward the retail and skincare application idea. So that’s the application we’ll build, and over the next few weeks we’ll share with you exactly what that looks like.

      In the coming weeks, we will show how Gal and his team helped Valtech build the content models that power this experience from Contentstack. Plus, Valtech’s software engineers research AR frameworks, interaction design storyboards start to take shape, and we wrestle with the surprisingly sticky problem of marker tracking. Read the week 1 post now.


      See the full pitch deck presented by Valtech below:
