# Beyond the Basics

### About this export

| Field | Value |
| --- | --- |
| **content_type** | course |
| **platform** | contentstack-academy |
| **source_url** | https://www.contentstack.com/academy/courses/beyond-the-basics |
| **language** | en |
| **product_area** | lytics |
| **learning_path** | standalone |
| **course_id** | beyond-the-basics |
| **slug** | beyond-the-basics |
| **version** | 2026-03-01 |
| **last_updated** | 2026-04-28 |
| **status** | published |
| **keywords** | ["lytics"] |
| **summary_one_line** | Dive into more advanced topics to optimize your use of Lytics and unlock new activations. |
| **total_duration_minutes** | 28 |
| **lessons_count** | 5 |
| **video_lessons_count** | 5 |
| **text_lessons_count** | 0 |
| **linked_learning_path** | standalone |
| **linked_assessment_ref** | LMS_UNCONFIGURED_COURSE_ASSESSMENT |
| **markdown_file_url** | /academy/md/courses/beyond-the-basics.md |
| **generated_at** | 2026-04-28T06:55:34.936Z |
| **intended_audience** | [] |
| **prerequisites** | [] |
| **related_courses** | [] |

> **Academy MD v3** — companion `.md` for Ask AI. Quizzes and graded assessments are **LMS-only**; this file never contains answer keys.

## Course Overview

| Metadata | Value |
| --- | --- |
| Catalog duration | 27m 59s |
| Released (if known) | 2026-03-01 |
| Product area | lytics |

### Description

Dive into more advanced topics to optimize your use of Lytics and unlock new activations.

### Learning objectives

1. Follow each lesson in order.
2. Practice in a training stack using placeholders **YOUR_STACK_API_KEY** and **YOUR_DELIVERY_TOKEN** in local `.env` files only.
3. Validate API responses against the official documentation.

### Topics covered

lytics

## Course structure

```text
beyond-the-basics/
├── 01-lytics-data-flow · video · 120s
├── 02-lytics-javascript-tag · video · 363s
├── 03-jobs-and-authorizations · video · 193s
├── 04-lookalike-models · video · 110s
└── 05-lytics-query-language · video · 893s
```

## Lessons

### Lesson 01 — Lytics Data Flow

<!-- ai_metadata: {"lesson_id":"01","type":"video","duration_seconds":120,"video_url":"https://cdn.jwplayer.com/previews/FQmwlMCw","thumbnail_url":"https://cdn.jwplayer.com/v2/media/FQmwlMCw/poster.jpg?width=720","topics":["Lytics","Data","Flow"]} -->

#### Video details

#### At a glance

- **Title:** Lytics Data Flow - Data Transformation
- **Duration:** 2m
- **Media link:** https://cdn.jwplayer.com/previews/FQmwlMCw
- **Publish date (unix):** 1751879785

#### Streaming renditions

- application/vnd.apple.mpegurl
- audio/mp4 · AAC Audio · 114318 bps
- video/mp4 · 180p · 246p · 147073 bps
- video/mp4 · 270p · 370p · 164152 bps
- video/mp4 · 360p · 494p · 185037 bps
- video/mp4 · 406p · 556p · 193660 bps

#### Timed text tracks (delivery)

- **thumbnails:** `https://cdn.jwplayer.com/strips/FQmwlMCw-120.vtt`

#### Transcript

The next place that we're going to look in this process is going to be the LQL. LQL stands for Lytics Query Language. It's a transformative language that we use to map your data as it's coming in onto the user profile. This can be very powerful in that it can enrich the downstream tools with additional data that they may not have had beforehand. And what this can also do is allow you to merge data that you wouldn't have otherwise been able to merge onto the user profile. An example of this would be if you had a first name field that existed in multiple different systems upstream of Lytics. We can take those multiple fields and actually map them to the same place on the user profile. What this will allow us to do then is create a more educated user profile. And when we use the term educated user profile, what we mean is that it has more sources feeding into it and therefore is less specific to one type of channel or one type of user behavior. The reason that this is super, super important is that's going to give us a more colorful picture of what this user is doing and what different channels they're interacting with throughout your ecosystem. Finally, the last place in this data channel or data process is going to be our audiences. Once the user profiles have been created and mapped to, they will be able to go into audiences that can be built out in the UI of Lytics. An audience is just going to be a group of users that have a certain set of criteria associated to them. So if we're going to think about creating an audience of users that have all interacted with a certain ad or a certain pop-up or a certain form, that's going to be the sort of criteria that we're going to want to set on the audience itself. Then anybody that's interacted with that information will fall into this audience, and that audience can then be sent downstream for activation.

#### Subtitles (WebVTT)

```webvtt
WEBVTT

1
00:00:00.000 --> 00:00:05.000
The next place that we're going to look in this process is going to be the LQL.

2
00:00:05.000 --> 00:00:08.000
LQL stands for Lytics Query Language.

3
00:00:08.000 --> 00:00:12.000
It's a transformative language that we use to map your data

4
00:00:12.000 --> 00:00:15.000
as it's coming in onto the user profile.

5
00:00:15.000 --> 00:00:20.000
This can be very powerful in that it can enrich the downstream

6
00:00:20.000 --> 00:00:24.000
tools with additional data that they may not have had beforehand.

7
00:00:24.000 --> 00:00:28.000
And what this can also do is allow you to merge data that you wouldn't have

8
00:00:28.000 --> 00:00:32.000
otherwise been able to merge onto the user profile.

9
00:00:32.000 --> 00:00:37.000
An example of this would be if you had a first name field that existed in

10
00:00:37.000 --> 00:00:40.000
multiple different systems upstream of Lytics.

11
00:00:40.000 --> 00:00:44.000
We can take those multiple fields and actually map them to the same place

12
00:00:44.000 --> 00:00:46.000
on the user profile.

13
00:00:46.000 --> 00:00:51.000
What this will allow us to do then is create a more educated user profile.

14
00:00:51.000 --> 00:00:54.000
And when we use the term educated user profile,

15
00:00:54.000 --> 00:00:59.000
what we mean is that it has more sources feeding into it

16
00:00:59.000 --> 00:01:03.000
and therefore is less specific to one type of channel

17
00:01:03.000 --> 00:01:05.000
or one type of user behavior.

18
00:01:05.000 --> 00:01:08.000
The reason that this is super, super important is that's going to give us

19
00:01:08.000 --> 00:01:12.000
a more colorful picture of what this user is doing and what different

20
00:01:12.000 --> 00:01:16.000
channels they're interacting with throughout your ecosystem.

21
00:01:16.000 --> 00:01:20.000
Finally, the last place in this data channel or data process

22
00:01:20.000 --> 00:01:22.000
is going to be our audiences.

23
00:01:22.000 --> 00:01:26.000
Once the user profiles have been created and mapped to,

24
00:01:26.000 --> 00:01:30.000
they will be able to go into audiences that can be built out in the UI

25
00:01:30.000 --> 00:01:31.000
of Lytics.

26
00:01:31.000 --> 00:01:35.000
An audience is just going to be a group of users that have a certain set

27
00:01:35.000 --> 00:01:38.000
of criteria associated to them.

28
00:01:38.000 --> 00:01:41.000
So if we're going to think about creating an audience of users that have

29
00:01:41.000 --> 00:01:46.000
all interacted with a certain ad or a certain pop-up or a certain form,

30
00:01:46.000 --> 00:01:49.000
that's going to be the sort of criteria that we're going to want to set

31
00:01:49.000 --> 00:01:51.000
on the audience itself.

32
00:01:51.000 --> 00:01:55.000
Then anybody that's interacted with that information will fall into this

33
00:01:55.000 --> 00:02:00.000
audience, and that audience can then be sent downstream for activation.

```


#### Lesson text

Get an overview of how data flows in and out of Lytics, and the data transformations that occur inside Lytics.

## Lytics Data Flow

### Introduction

#### What will I learn?

*   What are the main ingress and egress methods?
*   What data transformations take place inside of Lytics?
*   What are Events, LQL, and Audiences?

This guide will give you an overview of the main steps in the process of data flowing in and out of Lytics.

**What's Ingress and Egress?**

*   Ingress: Data going into Lytics AKA data import
*   Egress: Data going out of Lytics AKA data export

### Data Ingress

The first step is getting your customer data into Lytics. In this video (2.5 min), we will cover the main data ingress methods at a high level.

#### Methods to send data to Lytics

Events can be ingested into Lytics through a few methods.

*   SFTP
    *   CSV or JSON
    *   Batch process - can run hourly, daily, monthly
    *   Also can be used for a one-time backfill of historical data
*   Real-time APIs
    *   Lytics JavaScript Tag - real-time connector, capturing onsite user behavior
    *   Custom APIs (Collect endpoints) - powerful option for custom connections and setting the frequency needed for your use case (see the sketch after this list)
*   Bulk API
    *   Separate channel to not interrupt the real-time data coming into Lytics
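
Since the Collect endpoints are just HTTP, a minimal sketch of pushing one JSON event is shown below. This is a hedged illustration: the stream name `web_orders` and the token placeholder are invented for the example, and the exact endpoint path and authentication style should be confirmed against the Lytics API documentation.

```javascript
// Hedged sketch: POST a single JSON event to a Lytics Collect endpoint.
// Assumptions: the collect path has the form /collect/json/{stream}, and the
// stream name "web_orders" plus YOUR_LYTICS_API_TOKEN are illustrative only.
async function sendEvent(event) {
  const response = await fetch("https://api.lytics.io/collect/json/web_orders", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "YOUR_LYTICS_API_TOKEN", // keep real tokens in local .env files only
    },
    body: JSON.stringify(event),
  });
  if (!response.ok) throw new Error(`Collect request failed: ${response.status}`);
  return response.json();
}

// Raw fields land in the stream; they still need an LQL mapping before they
// appear on user profiles.
sendEvent({ email: "jane@example.com", order_id: "A-1001", total: 42.5 });
```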

#### Knowledge Check

**What data formats are supported by the SFTP method? Select all that apply.**

A. CSV

B. Free Form

C. JSON

D. XML

Answer: A, C

**The Lytics JavaScript Tag constantly pushes data into Lytics in real time.**

A. True

B. False

Answer: A

### Data Transformation

Once your customer data has been ingested into Lytics, the next step is to **transform** that data so that it can be activated on for your marketing use cases.

In the second part of the "Lytics Data Flow - Data Transformation" video (2 min), we'll give a brief overview of the data transformation process that occurs in Lytics:

*   Lytics Query Language (LQL)
*   User Profiles
*   Audiences

#### Data Transformation in Lytics

**Lytics Query Language (LQL)**: LQL is a transformative language that maps data as it comes into Lytics. LQL enables 2 key capabilities of the Lytics CDP: 

*   **Enrich** data - provide additional data to your downstream tools that wasn't available beforehand.
*   **Merge** data - combine multiple sources of data onto a single user profile (illustrated in the sketch below).
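
To make the merge capability concrete, here is a purely illustrative JavaScript sketch. It is not LQL and not how Lytics is implemented; it only shows the effect of mapping differently named first-name fields from several upstream systems onto one profile keyed by email, echoing the example from the video.

```javascript
// Illustrative only: the *effect* of LQL merging, not Lytics internals.
// Three upstream systems report the same person; two of them carry a
// first-name field under different names.
const events = [
  { source: "crm", email: "jane@example.com", FirstName: "Jane" },
  { source: "web", email: "jane@example.com", fname: "Jane" },
  { source: "email", email: "jane@example.com", opens: 3 },
];

const profiles = new Map();
for (const evt of events) {
  const profile = profiles.get(evt.email) ?? { email: evt.email };
  // Map either upstream field name onto the same profile slot.
  profile.first_name = evt.FirstName ?? evt.fname ?? profile.first_name;
  // Enrichment: fold in behavior from another channel when present.
  if (evt.opens !== undefined) profile.email_opens = evt.opens;
  profiles.set(evt.email, profile);
}

console.log(profiles.get("jane@example.com"));
// -> { email: "jane@example.com", first_name: "Jane", email_opens: 3 }
```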

**User Profiles**: Lytics builds "educated" user profiles, meaning:

*   There are more data sources feeding into a profile (such as web, email, CRM, etc.)
*   Not limited to a specific channel (such as email)
*   Not limited to a specific type of user behavior (such as email opens)

**Audiences**: Audiences are groups of users that share a certain set of criteria.

*   For example, users that have interacted with an ad, email, web form, etc.
*   Can be sent downstream for activation

**What does LQL stand for?**

A. Laughing Quite Loudly

B. Lytics Query Language

C. Learning Query Language

Answer: B

**Which of the following are benefits of LQL? Select all that apply.**

A. Map incoming data into user profiles

B. Enrich data with additional information from other sources

C. Remove duplicated fields across data sources

D. Merge data from multiple sources into an individual user profile

Answer: A, B, D

### Data Egress

Once your customer data has been transformed and enriched, you can then send user profile data and audiences from Lytics to your marketing tools for activation.  
This last section of the "Lytics Data Flow - Data Egress" video (3 min) will highlight a few of the main data egress options:

*   Website personalization
*   Email personalization
*   Enriching data in your database or CRM

#### Personalize the web experience based on user behavior

*   Main channel, used the most
*   Real-time, based on audience behavior
*   For example, you can surface different information for first-time visitors vs. frequent users (see the sketch below)
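
As a sketch of what this can look like in the browser, the snippet below branches the experience on audience membership. It assumes the V3 tag's `getEntity` callback surfaces audience slugs on the visitor profile, and `first_time_visitors` is a hypothetical audience slug; verify both against the tag documentation before relying on them.

```javascript
// Hedged sketch: branch the onsite experience on audience membership.
// Assumes jstag.getEntity returns the visitor profile and that audience
// slugs live at data.user.segments; "first_time_visitors" is hypothetical.
jstag.getEntity(function (profile) {
  const segments = profile?.data?.user?.segments ?? [];
  if (segments.includes("first_time_visitors")) {
    console.log("Show the welcome modal for new visitors");
  } else {
    console.log("Surface content tuned to returning visitors");
  }
});
```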

#### Email customers the right content at the right time

*   Evaluate and serve the right campaigns to users based on user behavior and what's relevant to their interests.
*   Send email data (opens, clicks) back into Lytics to further refine audiences

#### Enrich other data sources for downstream activations

*   Send user profiles with rich, behavioral data from Lytics to your CRM, database, or data lake
*   Opens up new opportunities for activation on your customer data that wouldn't have been possible otherwise

**Match the data egress option to its example use case.**

| Data egress option | Example use case |
| --- | --- |
| Web Personalization | Show a modal relevant for a first-time site visitor |
| Email Personalization | Send a product newsletter based on the user's previous purchase |
| Data enrichment | Add cross-channel information to previously siloed data sources |

## Summary and Next Steps

### Recap

As a quick recap, in this Lytics Data Flow guide we covered:

*   Data Ingress - different methods and different frequencies for sending data to Lytics.
*   Data Transformation - how Lytics merges and enriches data into actionable user profiles and audiences.
*   Data Egress - the downstream tools where you can activate your customer data.

#### Start with the RIGHT data

Your first instinct may be to send ALL of your data to Lytics. However, our most successful implementations start with just the RIGHT data to execute their key use cases.

### More Resources

#### Academy Courses

*   Lytics JavaScript Tag
*   Lytics Query Language

#### Documentation

*   [How does Lytics work?](https://learn.lytics.com/documentation/product/features/getting-started/how-does-lytics-work)
*   [Onboarding Web Data](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/onboarding-web-data)
*   [Integrated Marketing Tools](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/integrated-marketing-tools)

#### Key takeaways

- Connect **Lytics Data Flow** back to your stack configuration before moving to the next module.
- Capture one concrete artifact (screenshot, Postman call, or code snippet) that proves the step works in your environment.
- Re-read the delivery versus management boundary for anything you changed in the entry model.

### Lesson 02 — Lytics JavaScript Tag

<!-- ai_metadata: {"lesson_id":"02","type":"video","duration_seconds":363,"video_url":"https://cdn.jwplayer.com/previews/d4p068ys","thumbnail_url":"https://cdn.jwplayer.com/v2/media/d4p068ys/poster.jpg?width=720","topics":["Lytics","JavaScript","Tag"]} -->

#### Video details

#### At a glance

- **Title:** Lytics JavaScript Tag
- **Duration:** 6m 3s
- **Media link:** https://cdn.jwplayer.com/previews/d4p068ys
- **Publish date (unix):** 1751881581

#### Streaming renditions

- application/vnd.apple.mpegurl
- video/mp4 · 180p · 248p · 150480 bps

#### Timed text tracks (delivery)

- **thumbnails:** `https://cdn.jwplayer.com/strips/d4p068ys-120.vtt`

#### Transcript

In this video we're going to cover the JS tag and how to audit the data that's coming in from the JS tag in Lytics. So the first place that we're going to start is talking about how data comes in from the web and how to install that JS tag on your site. So there are going to be a couple different ways to install the JS tag so that that data passes into our data streams within Lytics. The first way to install the tag is through a tag manager such as Google Tag Manager or Tealium. There's usually going to be the Lytics tag in the store itself where you can just select the tag, put your account ID in there, and then install the tag. However, if you have a more custom installation that requires a little bit more finesse to get that information in, you also have the option to manually upload the script onto the web page and add those additional features such as if you have a single page app or if you want to pass additional information in for when that tag loads. Additionally, if you have any extra information that you want to pass to Lytics outside of what passes in the default data layer or in the default tag itself, you have options using the jstag.send functionality to get that information to pass into Lytics. So downstream there's the default information that can pass into the data streams into what we call the default stream, which is just a stream of data coming from the web. You also have the option, as I mentioned before, to use the jstag.send method to get additional information into Lytics. So both of those can be really, really useful when bringing data into Lytics. So the next place that we're going to look or talk about is the data streams. The reason that we start there is that's the first place the data is going to hit when it comes into Lytics. So this data stream is going to be the raw values that we see coming in from the web. This doesn't necessarily mean that it's going to be only the values that we want to bring in and map to the profiles, but any values that are available from the JS tag. That could include your default data layer, a more custom data layer that's been set up in your tag, or any custom sends and JSON payloads that are being sent using the jstag.send method. In the data streams tab in Lytics, you're able to see the raw fields, the frequency with which they come in, and any sort of cardinality with that field. This gives us a ton of information on if that field's coming in, how often that field is coming in, and then if there's any additional information associated to that field. You can also see sample information in the raw data stream of a sample set of what we can expect to see in that value. This is going to be super helpful for data auditing because it will allow us to see if there's any sort of garbage data coming in, information that's not super valuable, nulls, special characters that we don't want to see in that field. This is sort of a window into that raw information. The next place that the data is going to flow is the data schema. So the data schema is important because it's going to be only fields that we've explicitly mapped in the LQL itself. LQL, as a reminder, stands for Lytics Query Language. In the LQL query file, we are going to pull in information that we want to map to the profile and only information we want to map to the profile. What that means is we may be receiving raw information in the data stream that never gets mapped and never gets surfaced in the UI. That is acceptable and can happen.
However, that information will not show up in the data schema. The data schema can be super useful because it's going to talk to you about a couple different things. It's going to tell you what fields we've mapped in the LQL, the data types for those fields, how many users have those fields, if those fields are used in any audiences, and then also if there's any overlap between the different sources and those fields. So for instance, if we have something like an email, that's going to be seen throughout the different sources and that will show up in the data schema as something that's frequently used across different sources. This will also give you a health score or a percentage of how much of your data is actually shared across multiple sources. The final destination of your data, once it's come into the data schema, is going to be the user fields tab in Lytics. The reason that the user fields tab is really important is it's going to show you only fields that have showed up in user profiles themselves. If it's in the data schema but not in the user fields, that means that we've mapped the fields in the LQL but we have yet to see them on any user profiles. The user fields are going to show you the type of data that's showing up in that field and then also how much of that data is showing up in that field. It will also show you through visuals what sort of data shows up more frequently in those fields using different types of graphs. Depending on the field, whether it be a Boolean, a String, or an Int, the type of visuals are going to be a little bit different. So that's something to note. From the user fields tab, you can also create audiences. So if you find that there's a field there that you're really really interested in sort of hashing out what it is and how many users have it, how often it's seen, then you can directly go to the audience builder with that field and find out that information. Something really important to remember about both the data schema and the user fields is that they're going to be able to break down per source. So you can see per stream what information is available and what data has been mapped or has also showed up on the user fields themselves.

#### Subtitles (WebVTT)

```webvtt
WEBVTT

1
00:00:00.000 --> 00:00:05.000
In this video we're going to cover the JS tag and how to audit the data that's

2
00:00:05.000 --> 00:00:08.920
coming in from the JS tag in Lytics. So the first place that we're going to

3
00:00:08.920 --> 00:00:15.080
start is talking about how data comes in from the web and how to install that JS

4
00:00:15.080 --> 00:00:20.000
tag on your site. So there are going to be a couple different ways to install the JS

5
00:00:20.000 --> 00:00:25.640
tag so that that data passes in to our data streams within Lytics. The first way

6
00:00:25.640 --> 00:00:30.240
to install the method is through a tag manager such as Google Tag Manager or

7
00:00:30.240 --> 00:00:35.240
Tealium. There's usually going to be the Lytics tag in the store itself where you

8
00:00:35.240 --> 00:00:39.320
can just select the tag, put your account ID in there, and then install the tag.

9
00:00:39.320 --> 00:00:44.000
However, if you have a more custom installation that requires a little bit

10
00:00:44.000 --> 00:00:49.680
more finesse to get that information in, you also have the option to manually

11
00:00:49.680 --> 00:00:54.880
upload the script onto the web page and add those additional features such as if

12
00:00:54.880 --> 00:00:59.680
you have a single page app or if you want to pass additional information in

13
00:00:59.680 --> 00:01:06.880
for when that tag loads. Additionally, if you have any extra information that you

14
00:01:06.880 --> 00:01:11.240
want to pass to Lytics outside of what passes in the default data layer or in

15
00:01:11.240 --> 00:01:17.680
the default tag itself, you have options using the JS tag dot send functionality

16
00:01:17.680 --> 00:01:23.200
to get that information to pass into Lytics. So downstream there's the default

17
00:01:23.200 --> 00:01:27.640
information that can pass into the data streams into what we call the default

18
00:01:27.640 --> 00:01:32.280
stream, which is just a stream of data coming from the web. You also have the

19
00:01:32.280 --> 00:01:38.040
option, as I mentioned before, to use the JS tag dot send method to get additional

20
00:01:38.040 --> 00:01:43.480
information into Lytics. So both of those can be really, really useful when

21
00:01:43.480 --> 00:01:48.400
bringing data into Lytics. So the next place that we're going to look or talk

22
00:01:48.400 --> 00:01:52.920
about is the data streams. The reason that we start there is that's the first

23
00:01:52.920 --> 00:01:57.000
place the data is going to hit when it comes into Lytics. So this data stream is

24
00:01:57.000 --> 00:02:01.600
going to be the raw values that we see coming in from the web. This doesn't

25
00:02:01.600 --> 00:02:05.520
necessarily mean that it's going to be only the values that we want to

26
00:02:05.520 --> 00:02:10.080
bring in and map to the profiles, but any values that are available from the JS

27
00:02:10.080 --> 00:02:14.720
tag. That could include your default data layer, a more custom data layer that's

28
00:02:14.720 --> 00:02:19.760
been set up in your tag, or any custom sends and JSON payloads that are being

29
00:02:19.800 --> 00:02:25.880
sent using the JS tag dot send method. In the data streams tab in Lytics, you're

30
00:02:25.880 --> 00:02:31.840
able to see the raw fields, the frequency of which they come in, and any sort of

31
00:02:31.840 --> 00:02:38.000
cardinality with that field. This gives us a ton of information on if that

32
00:02:38.000 --> 00:02:42.040
field's coming in, how often that field is coming in, and then if there's any

33
00:02:42.040 --> 00:02:46.360
additional information associated to that field. You can also see sample

34
00:02:46.360 --> 00:02:51.760
information in the raw data stream of a sample set of what we can expect to see

35
00:02:51.760 --> 00:02:56.600
in that value. This is going to be super helpful for data auditing because it

36
00:02:56.600 --> 00:03:00.480
will allow us to see if there's any sort of garbage data coming in, information

37
00:03:00.480 --> 00:03:05.080
that's not super valuable, nulls, special characters that we don't want to see in

38
00:03:05.080 --> 00:03:10.800
that field. This is sort of a window into that raw information. The next place that

39
00:03:10.800 --> 00:03:15.800
the data is going to flow is the data schema. So the data schema is important

40
00:03:15.800 --> 00:03:19.800
because it's going to be only fields that we've explicitly mapped in the LQL

41
00:03:19.800 --> 00:03:26.600
itself. LQL, as a reminder, stands for a Lytics Query Language. In the Lytics

42
00:03:26.600 --> 00:03:31.800
Query file, we are going to pull in information that we want to map to the

43
00:03:31.800 --> 00:03:37.360
profile and only information we want to map to the profile. What that means is we

44
00:03:37.360 --> 00:03:41.720
may be receiving raw information in the data stream that never gets mapped and

45
00:03:41.760 --> 00:03:47.320
never gets surfaced in the UI. That is acceptable and can happen. However, that

46
00:03:47.320 --> 00:03:52.400
information will not show up in the data schema. The data schema can be super

47
00:03:52.400 --> 00:03:56.640
useful because it's going to talk to you about a couple different things. It's

48
00:03:56.640 --> 00:04:00.120
going to tell you what fields we've mapped in the LQL, the data types for

49
00:04:00.120 --> 00:04:05.560
those fields, how many users have those fields, if those fields are used in any

50
00:04:05.560 --> 00:04:10.280
audiences, and then also if there's any overlap between the different sources and

51
00:04:10.280 --> 00:04:14.600
those fields. So for instance, if we have something like an email, that's

52
00:04:14.600 --> 00:04:18.440
going to be seen throughout the different sources and that will show up

53
00:04:18.440 --> 00:04:22.960
in the data schema as something that's frequently used across different

54
00:04:22.960 --> 00:04:27.920
sources. This will also give you a health score or a percentage of how much of

55
00:04:27.920 --> 00:04:33.120
your data is actually shared across multiple sources. The final destination

56
00:04:33.120 --> 00:04:36.960
of your data, once it's come into the data schema, is going to be the user

57
00:04:36.960 --> 00:04:42.040
fields tab in Lytics. The reason that the user fields tab is really important is

58
00:04:42.040 --> 00:04:45.720
it's going to show you only fields that have showed up in user profiles

59
00:04:45.720 --> 00:04:51.480
themselves. If it's in the data schema but not in the user fields, that means

60
00:04:51.480 --> 00:04:56.600
that we've mapped the fields in the LQL but we have yet to see them on any user

61
00:04:56.600 --> 00:05:01.160
profiles. The user fields are going to show you the type of data that's showing

62
00:05:01.160 --> 00:05:05.840
up in that field and then also how much of that data is showing up in that

63
00:05:05.840 --> 00:05:10.560
field. It will also show you through visuals what sort of data shows up more

64
00:05:10.560 --> 00:05:15.200
frequently in those fields using different types of graphs. Depending on

65
00:05:15.200 --> 00:05:19.160
the field, whether it be a Boolean, a String, or an Int, the type of visuals

66
00:05:19.160 --> 00:05:23.520
are going to be a little bit different. So that's something to note. From the

67
00:05:23.520 --> 00:05:27.840
user fields tab, you can also create audiences. So if you find that there's a

68
00:05:27.840 --> 00:05:32.200
field there that you're really really interested in sort of hashing out what

69
00:05:32.200 --> 00:05:37.120
it is and how many users have it, how often it's seen, then you can directly go

70
00:05:37.120 --> 00:05:41.680
to the audience builder with that field and find out that information. Something

71
00:05:41.680 --> 00:05:46.240
really important to remember about both the data schema and the user

72
00:05:46.240 --> 00:05:51.200
fields is that they're going to be able to break down per source. So you can see per

73
00:05:51.200 --> 00:05:57.160
stream what information is available and what data has been mapped or has

74
00:05:57.160 --> 00:06:01.480
also showed up on the user fields themselves.

```


#### Lesson text

Learn about the Lytics JavaScript tag and how it passes information from your website into Lytics user profiles.

## Video Tutorial

### What will I learn?  

*   What default data and events are collected?
*   How raw event inputs are sent through the [Data Streams](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/data-streams).
*   How LQL mapping creates the [Data Schema](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/schema-audit).
*   How [User Fields](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/user-fields) are stored in the user profile.

Watch the "Lytics JavaScript Tag" video (6 mins) for an introduction to the Lytics JavaScript tag and how it passes information from your website into Lytics user profiles.

### LQL Mapping

The video briefly mentions LQL mapping functions. See the Lytics Query Language course for more details.

## Installing the Lytics JS Tag

To find the Lytics JavaScript tag in the Lytics UI, click your Account Name at the bottom of the left-hand navigation menu and click [JavaScript Snippet](https://app.lytics.com/connect?view=v3) from the expanded menu.

Make sure to use Version 3 of the Lytics tag. See the documentation for more information: 

*   [Installation & Configuration](https://learn.lytics.com/documentation/product/features/lytics-javascript-tag/using-version-3/installation-configuration)
*   [Version 3 documentation](https://learn.lytics.com/documentation/product/features/lytics-javascript-tag/version-3-improvements)

You'll probably find it easier to install the Lytics JS Tag using a Tag Manager, like Google Tag Manager. See details at: [Working with Tag Managers](https://learn.lytics.com/documentation/product/features/lytics-javascript-tag/working-with-tag-managers)
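
For orientation, a typical Version 3 bootstrap ends with calls shaped like the sketch below. Treat this as an approximation rather than the canonical snippet: copy the exact code (including the loader that defines the global `jstag`) from the JavaScript Snippet screen in your account, and note that `YOUR_LYTICS_ACCOUNT_ID` is a placeholder.

```javascript
// Approximate shape of the V3 tag initialization — copy the exact snippet,
// including the loader that defines the global `jstag`, from the Lytics UI.
jstag.init({
  // YOUR_LYTICS_ACCOUNT_ID is a placeholder for your real account ID.
  src: "https://c.lytics.io/api/tag/YOUR_LYTICS_ACCOUNT_ID/latest.min.js",
});

// Record a page view so onsite behavior starts flowing into the default stream.
jstag.pageView();
```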

## Knowledge Check

**Where can raw data fields be seen in the Lytics UI?**

Answer: Data Streams

**Map the section of the data flow with the data points involved**

| Data flow section | Data points involved |
| --- | --- |
| Data Stream | Includes raw values coming in from the web |
| Data Schema | Includes mapped fields, even if no data |
| User Fields | Includes mapped data on user profiles |

**Which data stream contains web data collected by the Lytics JavaScript Tag?**

A. Experiences stream

B. Default stream

C. Custom stream

Answer: B

**Where can you find the percentage of user data being used in audiences?**

A. Data Streams

B. User Fields

C. Schema Audit

Answer: C

**New fields added via the jstag.send method will automatically be added to a user's profile.**

A. True

B. False

Answer: False - raw event data still needs to be mapped using Lytics Query Language (LQL) through the Data Schema first.
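
In practice, that means a `jstag.send` call like the hedged sketch below only puts raw fields into a stream; they reach user profiles once an LQL mapping picks them up. The stream and field names here are invented for illustration.

```javascript
// Send a custom event to a named stream (first argument). The stream name
// "newsletter_signup" and both field names are illustrative.
jstag.send("newsletter_signup", {
  email: "jane@example.com",
  signup_source: "footer_form",
});

// These fields now show up as raw values under Data Streams, but they will
// not appear in the Data Schema or on user profiles until an LQL query maps them.
```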

## More Resources

### Documentation

Here are some recommended resources to continue learning about the Lytics JS Tag.

*   [Lytics JavaScript Tag introduction](https://learn.lytics.com/documentation/product/features/lytics-javascript-tag/introduction)
*   [Onboarding Web Data](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/onboarding-web-data)
*   [Collecting Data with V3 Tag](https://learn.lytics.com/documentation/product/features/lytics-javascript-tag/using-version-3/collecting-data)
*   [Troubleshooting: Verifying Data is sent to Lytics](https://learn.lytics.com/documentation/product/features/lytics-javascript-tag/troubleshooting#verify-data-is-being-sent-to-lytics)
*   [Installing Lytics Image Pixel](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/onboarding-web-data#lytics-image-pixel)

#### Key takeaways

- Connect **Lytics JavaScript Tag** back to your stack configuration before moving to the next module.
- Capture one concrete artifact (screenshot, Postman call, or code snippet) that proves the step works in your environment.
- Re-read the delivery versus management boundary for anything you changed in the entry model.

### Lesson 03 — Jobs and Authorizations

<!-- ai_metadata: {"lesson_id":"03","type":"video","duration_seconds":193,"video_url":"https://cdn.jwplayer.com/previews/igYBULhs","thumbnail_url":"https://cdn.jwplayer.com/v2/media/igYBULhs/poster.jpg?width=720","topics":["Jobs","and","Authorizations"]} -->

#### Video details

#### At a glance

- **Title:** Jobs And Authorizations
- **Duration:** 3m 13s
- **Media link:** https://cdn.jwplayer.com/previews/igYBULhs
- **Publish date (unix):** 1751882661

#### Streaming renditions

- application/vnd.apple.mpegurl
- audio/mp4 · AAC Audio · 113851 bps
- video/mp4 · 180p · 200p · 145281 bps
- video/mp4 · 270p · 300p · 163767 bps
- video/mp4 · 360p · 400p · 185365 bps
- video/mp4 · 406p · 450p · 198596 bps
- video/mp4 · 540p · 600p · 239894 bps

#### Timed text tracks (delivery)

- **thumbnails:** `https://cdn.jwplayer.com/strips/igYBULhs-120.vtt`

#### Transcript

Long, long ago when we first started integrating third-party marketing tools into Lytics, we considered the providers themselves to be the first-class citizens. Look at all these providers! Wow! But as we grew and learned how our customers were using these integrations, what they wanted to do and how, we realized we needed to invert that original setup. Let's begin with authorizations, which are basically secret handshakes that Lytics and third-party tools use to recognize that it's safe to pass data to one of them. On the authorization dashboard seen here, you'll see a filterable, sortable, and searchable list of all of your authorizations. We now have labels in addition to descriptions to help you find the one you're looking for. We also include the health status of the authorization for those providers that we can run daily health checks against. Not all providers have support for that. The authorization summary gives you an at-a-glance view of the authorization's details and usage. From here, you can edit an authorization and also delete it, but you can only delete an authorization if it is not actively in use, which this one is, and that's why we get this error message. Below the activity chart are two tables, one for the associated jobs and another for associated experiences, since imported experiences still require authorization. While we were streamlining the flow for jobs and authorizations, we also extracted some of the technical jargon. We used to call our jobs works. The types of jobs were called workflows. Everything that existed before still exists, but we want both the language and your experience working in the Lytics application to feel more natural. Jobs are configured rule sets that determine the specific data passed between Lytics and a third-party tool. This can be data imported into Lytics, say, activity data, or exported from Lytics, such as audiences. On the jobs dashboard, you can see a list of all of your jobs, once again in a filterable, sortable, and searchable format. Note that the search picks up things from the name, the authorization, and the provider. Like authorizations, jobs now have names and descriptions to help you identify them quickly. The job summary contains at-a-glance details, including links to the authorization that it's using, and if it's supporting an imported experience, a link to that. The details tab also displays the configuration details for the job, and the logs tab will show you recent events, most useful for jobs that are running continually. Thanks for watching this run-through of jobs and authorizations.

#### Subtitles (WebVTT)

```webvtt
WEBVTT

1
00:00:00.000 --> 00:00:04.600
Long, long ago when we first started integrating third-party marketing tools

2
00:00:04.600 --> 00:00:08.280
into Lytics, we considered the providers themselves to be the first-class

3
00:00:08.280 --> 00:00:13.480
citizens. Look at all these providers! Wow! But as we grew and learned how our

4
00:00:13.480 --> 00:00:17.440
customers were using these integrations, what they wanted to do and how, we

5
00:00:17.440 --> 00:00:24.800
realized we needed to invert that original setup. Let's begin with

6
00:00:24.800 --> 00:00:28.200
authorizations, which are basically secret handshakes that Lytics and

7
00:00:28.200 --> 00:00:32.360
third-party tools use to recognize that it's safe to pass data to one of them. On

8
00:00:32.360 --> 00:00:40.480
the authorization dashboard seen here, you'll see a filterable, sortable, and

9
00:00:40.480 --> 00:00:47.480
searchable list of all of your authorizations. We now have labels in

10
00:00:47.480 --> 00:00:53.440
addition to descriptions to help you find the one you're looking for. We also

11
00:00:53.440 --> 00:00:57.440
include the health status of the authorization for those providers that

12
00:00:57.440 --> 00:01:02.280
we can run daily health checks against. Not all providers have support for that.

13
00:01:02.280 --> 00:01:07.560
The authorization summary gives you an at-a-glance view of the authorization's

14
00:01:07.560 --> 00:01:13.880
details and usage. From here, you can edit an authorization and also delete it, but

15
00:01:13.880 --> 00:01:18.560
you can only delete an authorization if it is not actively in use, which this one

16
00:01:18.560 --> 00:01:23.480
is, and that's why we get this error message. Below the activity chart are two

17
00:01:23.480 --> 00:01:29.760
tables, one for the associated jobs and another for associated experiences, since

18
00:01:29.760 --> 00:01:35.920
imported experiences still require authorization. While we were streamlining

19
00:01:35.920 --> 00:01:40.160
the flow for jobs and authorizations, we also extracted some of the technical

20
00:01:40.160 --> 00:01:45.600
jargon. We used to call our jobs works. The types of jobs were called workflows.

21
00:01:45.600 --> 00:01:50.400
Everything that existed before still exists, but we want both the language and

22
00:01:50.400 --> 00:01:55.200
your experience working in the Lytics application to feel more natural. Jobs

23
00:01:55.200 --> 00:02:00.040
are configured rule sets that determine the specific data passed between Lytics

24
00:02:00.040 --> 00:02:05.520
and a third-party tool. This can be data imported into Lytics, say, activity data, or

25
00:02:05.520 --> 00:02:11.800
exported from Lytics, such as audiences. On the jobs dashboard, you can see a list of

26
00:02:11.800 --> 00:02:16.920
all of your jobs, once again in a filterable,

27
00:02:17.920 --> 00:02:29.000
sortable, and searchable format. Note that the search picks up things from the name,

28
00:02:29.000 --> 00:02:38.320
the authorization, and the provider. Like authorizations, jobs now have names and

29
00:02:38.320 --> 00:02:43.520
descriptions to help you identify them quickly. The job summary contains at-a-

30
00:02:43.600 --> 00:02:50.440
glance details, including links to the authorization that it's using, and if

31
00:02:50.440 --> 00:02:59.480
it's supporting an imported experience, a link to that. The details tab also

32
00:02:59.480 --> 00:03:04.120
displays the configuration details for the job, and the logs tab will show you

33
00:03:04.120 --> 00:03:09.560
recent events most useful for jobs that are running continually. Thanks for

34
00:03:09.560 --> 00:03:13.960
watching this run-through of jobs and authorizations.

```


#### Lesson text

Walk through setting up jobs and authorizations to connect Lytics to your marketing tools.

## Introduction

### Integrations Overview

**Note:** On January 10, 2023, we upgraded our UI with a new, refreshed interface. All of the underlying functionality is the same, but you will notice that things look a little different from this Academy guide. The most notable change is that the navigation menu has moved from the top of the app to the left side. We appreciate your patience as we work on updating our Academy.

In this training guide, we will demonstrate how to set up jobs and authorizations step-by-step. Jobs are fundamental to moving your first-party data between Lytics and your marketing tools.

To start, watch the short overview video "Jobs and Authorizations" (3 mins).

### Objectives and Key Terms

#### Managing your data connections in Lytics

The Data tab in Lytics is central to setting up and managing the ongoing flow of data between Lytics and your other marketing tools. What used to be labeled as the "**Integrations**" tab now lives within 2 new sections under the **Data** tab: **Jobs** and **Authorizations.**

![Jobs - Authorizations Nav.png](https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/bltd91798a2a2383c49/686b9df7b6f4b47e96082834/Jobs_-_Authorizations_Nav.png)

By the end of this training, you will have learned:

*   How to create a job
*   How to create an authorization
*   What's new & improved

## Key Terms

*   **Jobs:** responsible for moving your first-party data between Lytics and your marketing tools.
    *   3 main types: imports, exports, and enrichments (more on this in the next section).
    *   Formerly referred to as "works" and "workflows."
*   **Authorizations:** responsible for connecting your provider tools to Lytics.
*   **Provider**: third-party tool that you are connecting with Lytics.
*   **Sources:** send data from a provider to Lytics.
*   **Destinations:** send data from Lytics to a provider.

**Match the term to its definition**

| Term | Definition |
| --- | --- |
| Jobs | How you move data between Lytics and other tools |
| Authorizations | How you allow other tools to connect to Lytics |
| Provider | Third-party tool that you are connecting with Lytics |

## Creating Jobs

### Job Types

#### Three Main Job Types

A provider tool may support one or more job types within Lytics. There are a variety of ways you can send data to and from Lytics, but they fall into 3 main categories:

*   **Import jobs**: Ingest data from a source tool into Lytics. 
    *   Result in data coming into Lytics to populate user profiles.

*   **Export jobs:** Send data from Lytics to a destination tool.
    *   Result in user profiles or audience membership being sent to your channel tools for campaign activation.

*   **Enrichment jobs:** Use a third-party service to enhance and enrich existing user profiles within Lytics. 
    *   Take existing profile data and expand on it by pulling in more information.

Many providers, such as **Mailchimp**, offer an import and export audiences option.

![mailchimp-job-type.png](https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/bltf4d8083e08623956/686b9e903687732f96eec3b9/mailchimp-job-type.png)

**Amazon Web Services** is an example of a provider with many job types including several different tools under the AWS umbrella - Kinesis, Pinpoint, S3, etc.

![aws-job-types.png](https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/bltc4e076777850d748/686b9e914b4fe965bf5c4f12/aws-job-types.png)

**Note:** For a few providers such as **Google Tag Manager**, you will see **Other** listed as the job type.

For more info on the connection types and techniques available, see our [Integrated Marketing Tools](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/integrated-marketing-tools) documentation.

**True or false: All integrations support import and export jobs.**

A. True

B. False

Answer: False - Many providers support import and export jobs but not all. You can see which job types are supported in app and on Learn Lytics.

### Jobs Dashboard

The Jobs Dashboard gives an overview of your existing jobs and their status, and it's where you'll go to create a new job.

#### How to create a job

Check out the "Jobs Dashboard - Create A Job" video (3.5 mins) for a quick walk through of creating a job and an authorization together in a single flow.

Regardless of the job type, you'll follow these steps:

1.  Choose the provider.
2.  Choose the job type.
3.  Choose the authorization.
4.  Configure your job.

![jobs-wizard.png](https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt51edc03d0040b5f7/686b9fb54b4fe98df75c4f23/jobs-wizard.png)

For more info, see our [Jobs Dashboard](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/jobs/jobs-dashboard) documentation.

**Which of the following is NOT a step in the job creation flow?**

A. Choose Provider

B. Choose Authorization

C. Create Custom Job Type

D. Configure Job

Answer: C

### Job Summary

When you click on a job from the dashboard, you will be taken to the job's summary page. Here, you'll find important metadata about the job, such as the status, owner, creation date, and associated authorization, as well as the configuration details.

![lytics-job-summary-example.png](https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/bltb0631065fb8f196c/686ba0fff1c7c2d1154ec68f/lytics-job-summary-example.png)

#### Edit the name and description of a job 

While creating a new job or editing an existing one, you can add a custom label and description. This is particularly important when you have numerous jobs of the same type running.

#### Job Activity Metrics

See how many user profiles were added, removed, or omitted on an hourly, daily, or weekly basis.

#### Job Status and Logs

Read up on job statuses and checking the logs in our [Job Summary](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/jobs/job-summary) documentation. 

**For an existing job, which of the following can you edit? Select all that apply.**

A. Name (label)

B. Job Type

C. Description

D. Provider

Answer: A, C - If you want to change the provider or job type, you should simply start a new job.

**The Activity chart on the Job Summary displays which of the following metrics?**

A. Potential reach, total reach, converted

B. Profiles added, profiles removed, and profiles omitted

C. Data fields, data source, data streams

D. Audience size, conversions, conversion rate

Answer: B

## Creating Authorizations

### Authorizations Dashboard

The Authorizations Dashboard gives an at-a-glance view of your existing authorizations and their usage.

![Lytics\_authorizations\_dashboard.png](https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/bltf35e98764b7c09c4/686ba21828e271aa3aa3b482/Lytics_authorizations_dashboard.png)

#### Authorization health indicator

Lytics checks the status of your authorizations automatically on a daily basis. For an authorization to be marked as "**healthy**," it must be **valid** and **active**. [Learn More](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/authorizations/authorization-summary#what-determines-authorization-health?)

#### How to create an authorization

There are two ways you can create an authorization:

**Option 1**   
From the Authorizations Dashboard, click **Create New Authorization** and complete the steps:

1.  Choose the provider.
2.  Select the authorization method.
3.  Complete the configuration.

**Option 2** 

As shown in the video in the previous section, you can also create a new authorization **inline within the job creation flow**. This lets you set up a new authorization without interrupting the process of starting a job.

![auth-creation-in-job-wizard.png](https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt6d58d1161a4d4755/686ba21802334227e192e5ee/auth-creation-in-job-wizard.png)

If you go this route, you'll see two steps added to the job flow that you'll complete before configuring your job.

#### Authorization Methods

Note that some integration providers only have one authorization method, but others offer multiple methods such as API keys, OAuth, etc. Certain methods enable different job types, so be sure to select the option that supports your use case.

For more info, see our [Authorizations Dashboard](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/authorizations/authorizations-dashboard) documentation.

**A certain authorization method may be required to enable one job type vs. another.**

A. True

B. False

Answer: True - Certain authorization methods enable different jobs such as real-time audience exports vs. bulk audience exports. If you are unsure which method to use, you can find more information in the integration documentation for your provider.

### Authorization Summary

When you click on an authorization from the dashboard, you will be taken to its summary page. Similar to the job summary, you'll find important metadata about the authorization such as the status, owner, creation date, etc.

![authorizations-summary-example.png](https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt2d34fee5ed4eaa19/686ba3ef321163ea84005610/authorizations-summary-example.png)

#### Authorization activity metrics

See how many API requests have been sent on an hourly, daily, or weekly basis.

#### Edit the names and descriptions of authorizations

Add a custom label and description while creating an authorization or editing an existing one.

#### Change the authorization associated with a job

You can replace or update an authorization. An authorization may need to be changed for many reasons, such as incorrect credentials or the departure of the employee associated with it.

**NOTE:** You must select an authorization that supports the existing job configuration (e.g. authorizing for the specific account the job is targeting).

#### Delete authorizations directly via the UI

Keep your account current by deleting any invalid or unused authorizations.   
**NOTE:** You can only delete an authorization if there are no active jobs using it.

For more info, see our [Authorization Summary](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/authorizations/authorization-summary) documentation.

**What metric is shown in the authorization activity chart?**

Answer: API Requests

#### Knowledge Check

**Users can change the authorization associated with an existing job.**

A. True

B. False

Answer: True - As long as the authorization method supports the job type selected, you can change the authorization or add a new one.

**You can delete an authorization that is associated with a job when the job status is \_\_\_\_\_? Select all that apply.**

A. Running

B. Sleeping

C. Failed

D. Paused

E. Completed

Answer: C, E

## Wrap-Up

### Summary and More Resources

**Nice job, learning about jobs!**

For a quick summary of what we covered in this guide:

*   Why we made this series of improvements to reduce friction points.
*   How to create a job in the new, streamlined flow.
*   How to create an authorization - on its own or as part of creating a job.

#### More Resources

The docs referenced throughout this guide:

*   [Integrated Marketing Tools](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/integrated-marketing-tools) 
*   [Jobs Dashboard](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/jobs/jobs-dashboard)
*   [Job Summary](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/jobs/job-summary)
*   [Authorizations Dashboard](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/authorizations/authorizations-dashboard)
*   [Authorization Summary](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/authorizations/authorization-summary)

For a recap of the series of improvements Lytics made to managing integrations:

*   [Streamlining Jobs Management](https://learn.lytics.com/product-updates/streamlining-jobs-management)
*   [Authorizations Get an Upgrade](https://learn.lytics.com/product-updates/authorizations-get-an-upgrade)
*   [Managing Integrations Made Easier](https://learn.lytics.com/product-updates/managing-integrations-made-easier)

#### Key takeaways

- Connect **Jobs and Authorizations** back to your stack configuration before moving to the next module.
- Capture one concrete artifact (screenshot, Postman call, or code snippet) that proves the step works in your environment.
- Re-read the delivery versus management boundary for anything you changed in the entry model.

### Lesson 04 — Lookalike Models

<!-- ai_metadata: {"lesson_id":"04","type":"video","duration_seconds":110,"video_url":"https://cdn.jwplayer.com/previews/oXPDbF1w","thumbnail_url":"https://cdn.jwplayer.com/v2/media/oXPDbF1w/poster.jpg?width=720","topics":["Lookalike","Models"]} -->

#### Video details

#### At a glance

- **Title:** Lookalike Models
- **Duration:** 1m 50s
- **Media link:** https://cdn.jwplayer.com/previews/oXPDbF1w
- **Publish date (unix):** 1751897371

#### Streaming renditions

- application/vnd.apple.mpegurl
- audio/mp4 · AAC Audio · 113573 kbps
- video/mp4 · 180p · 200p · 152187 kbps
- video/mp4 · 270p · 300p · 168241 kbps
- video/mp4 · 360p · 400p · 185580 kbps
- video/mp4 · 406p · 450p · 201175 kbps
- video/mp4 · 540p · 600p · 244015 kbps

#### Timed text tracks (delivery)

- **thumbnails:** `https://cdn.jwplayer.com/strips/oXPDbF1w-120.vtt`

#### Transcript

Welcome to the laboratory, your hub for getting hands-on with all things data science inside of Lytics. Here we will focus on bringing you unprecedented access to industry-leading machine learning and AI tools, all of which are self-serviceable. It really is like having a team of data scientists at your disposal. What if you knew which users were going to buy a product or unsubscribe before an action even took place? Now with lookalike models you can do just that by comparing those who have reached a particular goal, the target audience, to those who have not, your source. We can then build a predictive audience of those who are very likely to reach that goal in the future. The UI comes packed with a variety of existing and new features as well as improvements upon existing ones. Auto-tuning for example acts as an easy button ensuring even those with little experience can successfully build great models to power predictive audiences. We've improved both visuals and navigation to ensure you fully understand how models are being used and what makes each one unique. Clear and actionable debugging information ensures there's no mystery and always a next step. Finally, accuracy and reach help you find the perfect balance to support your desired use case. Lookalike models apply to many marketing use cases at any stage of the funnel. Locate customers who are likely to churn, find those most likely to make a purchase, and nurture those that show clear signs of becoming high-value users. And so much more. Get started today by creating your first model.

#### Subtitles (WebVTT)

```webvtt
WEBVTT

1
00:00:00.000 --> 00:00:04.960
Welcome to the laboratory, your hub for getting hands-on with all things data

2
00:00:04.960 --> 00:00:09.820
science inside of Lytics. Here we will focus on bringing you unprecedented

3
00:00:09.820 --> 00:00:14.400
access to industry-leading machine learning and AI tools, all of which are

4
00:00:14.400 --> 00:00:18.880
self-serviceable. It really is like having a team of data scientists at your

5
00:00:18.880 --> 00:00:24.760
disposal. What if you knew which users were going to buy a product or unsubscribe

6
00:00:24.760 --> 00:00:29.720
before an action even took place? Now with lookalike models you can do just

7
00:00:29.720 --> 00:00:35.200
that by comparing those who have reached a particular goal, the target audience, to

8
00:00:35.200 --> 00:00:40.040
those who have not, your source. We can then build a predictive audience of

9
00:00:40.040 --> 00:00:46.560
those who are very likely to reach that goal in the future. The UI comes packed

10
00:00:46.560 --> 00:00:50.840
with a variety of existing and new features as well as improvements upon

11
00:00:50.840 --> 00:00:56.960
existing ones. Auto-tuning for example acts as an easy button ensuring even

12
00:00:56.960 --> 00:01:01.280
those with little experience can successfully build great models to power

13
00:01:01.280 --> 00:01:07.840
predictive audiences. We've improved both visuals and navigation to ensure you

14
00:01:07.840 --> 00:01:12.680
fully understand how models are being used and what makes each one unique.

15
00:01:12.680 --> 00:01:17.440
Clear and actionable debugging information ensures there's no mystery

16
00:01:17.440 --> 00:01:24.400
and always a next step. Finally, accuracy and reach help you find the perfect

17
00:01:24.400 --> 00:01:31.040
balance to support your desired use case. Lookalike models apply to many

18
00:01:31.040 --> 00:01:36.200
marketing use cases at any stage of the funnel. Locate customers who are likely

19
00:01:36.200 --> 00:01:42.080
to churn, find those most likely to make a purchase, and nurture those that show

20
00:01:42.080 --> 00:01:48.120
clear signs of becoming high-value users. And so much more. Get started today by

21
00:01:48.120 --> 00:01:51.400
creating your first model.

```

#### Lesson text

Gain an in-depth understanding of Lytics Lookalike Models and how to build Predictive Audiences that drive engagement and conversions.

## Introducing Lookalike Models

### Overview

**Note:** On January 10, 2023, we upgraded our UI with a new, refreshed interface. All of the underlying functionality is the same, but you will notice that things look a little different from this Academy guide. The most notable change is that the navigation menu has moved from the top of the app to the left side. We appreciate your patience as we work on updating our Academy.

Lytics makes it easy to build Lookalike Models and Predictive Audiences that drive engagement and conversions.

### What will I learn?

*   What are Lytics Lookalike Models? How can I use them?
*   Where do Lookalike Models live in Lytics?
*   How to start making models
*   How to understand your model performance and what to do next
*   How to create Predictive Audiences

### How are Lytics Lookalike Models different from other tools?

You may be familiar with "Lookalike Audiences" on Facebook or "Similar Audiences" on Google. Lytics Lookalike Models are different in a few key ways:

*   Based on your own first-party, cross-channel data (no walled garden)
*   You can build and validate your own custom models (no black box)
*   Quick and easy interface for marketers to make models (no data science team required)
*   Provide real-time predictions - updating user scores for dynamic and effective targeting

### Lytics Laboratory

Lytics provides a **data science workbench for your marketing teams** in the **Laboratory** section of the Lytics UI. This is where you'll go to build **Lookalike Models**.

![Click Laboratory > UI Models.png](https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/bltb24a9f4465cfcc2c/686bd5514e325563d15687ae/Click_Laboratory_UI_Models.png)

Check out the short video "Lookalike Models" (1.5 min).

#### What are common use cases for Lookalike Models?

*   Optimize early stage funnel. Identify users least likely to progress so you can **cast an intelligently wide net**.
    *   Unknown users to known users
*   Optimize late stage funnel. Identify users most likely to convert so you can **optimize higher touch experiences**.
    *   Known Users to Purchasers
    *   One time Purchasers to Repeat Purchasers
*   Nurture users who are likely to become high lifetime value (LTV) customers
*   Reduce churn by identifying known users who are likely to churn

Learn more: [Lytics Laboratory documentation](https://learn.lytics.com/documentation/product/features/laboratory/introduction)  

**Which of the following are benefits of Lytics Lookalike Models? (select all that apply)**

A. Models continuously update allowing real-time predictions

B. You can import existing models from other tools into Lytics

C. Custom models can be built from scratch

D. Models are built using your own first-party data

Answer: A, C, D

### Core Concepts

Before learning how to build Lookalike Models in Lytics, it's important to understand the key terms that will be used throughout: 

*   **Model**: the output of customer data and an ML algorithm that can provide predictions on the data.
*   **Source Audience**: the group of users you want to reach with your marketing messages to get them to convert on a particular marketing goal. 
    *   For example: “unknown users”.
    *   Lytics calculates a score from 0 to 1 representing the user's likelihood to convert to the target audience.
*   **Target Audience**: the group of users that represents the desired outcome for users in the source audience (have converted in the past).
    *   For example: “users with email addresses”.
*   **Predictive Audience**: the output audience(s) that are built using the predictive score from a Lookalike Model.

For more definitions, see our [Glossary](https://learn.lytics.com/content/lytics-glossary).
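
One compact way to relate these four terms (the notation is ours, not from the Lytics docs): the model learns from the Target's "already converted" examples, assigns every Source user a score, and the Predictive Audience is the thresholded result.

```latex
% m scores each Source user; tau is the decision threshold (covered later in this lesson).
m : \text{Source} \to [0,1], \qquad
\text{Predictive Audience} = \{\, u \in \text{Source} : m(u) \ge \tau \,\}
```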

#### Segmentation Strategies: Manual vs. Machine

Check out the [Lookalike Models Overview](https://learn.lytics.com/documentation/product/features/laboratory/lookalike-models/overview#segmentation-strategies:-manual-vs.-machine) for an example of how Lookalike Models work compared to a manual segmentation strategy.

**Match the Lytics term to its definition.**

| Term | Definition |
| --- | --- |
| Model | Output of data and an algorithm that provides predictions |
| Source Audience | Users that have yet to convert on your marketing goal |
| Target Audience | Users that have converted in the past |
| Predictive Audience | Users available to target, using the predictive score output from a Lookalike Model |

## Building Lookalike Models

### Lookalike Model Builder

The Lookalike Model Builder provides an interface for marketers to quickly build custom models in Lytics. Navigate to the Lookalike Models dashboard and find the **Create New Model** button at the top right to get started.

![click-create-new-model.png](https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt49a93af1fddafc74/686be0df47548f406f569af6/click-create-new-model.png)

### Configuration Options

You can use a number of basic and advanced configuration options to build your model:

*   Basic options provide an intelligent starting point, making it quick and easy to get started. 
*   Advanced options give granular control to those who have more experience with model building.

For most use cases, building a model with the basic configuration parameters is sufficient. The **only required parameters** are:

*   Source audience
*   Target audience 

Selecting the right audiences is **very important for building a usable model**. We'll touch on this more in a later section.

For descriptions and examples of all configuration options, see [Model Builder documentation](https://learn.lytics.com/documentation/product/features/laboratory/lookalike-models/model-builder). 

**Which of the following basic configuration parameters are required?**

A. Custom Model Name

B. Source Audience

C. Auto Tune

D. Target Audience

Answer: B, D

**Models built with Auto Tune can also include advanced configuration parameters.**

A. True

B. False

Answer: True

### Model Dashboard

The Lookalike Model dashboard provides visibility into the health, status, and usage of your Lookalike Models. It lists all models in your account, allowing you to search, sort, and view the summary, configuration, and diagnostic information for each model you've created.

![lytics-lookalike-model-summary.png](https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt9aae40f62f0e263e/686be6318f61ad7bdddc66d6/lytics-lookalike-model-summary.png)

Here's a quick rundown of the main sections:

#### Model health chart

Shows the total number of models for an account, and how many are considered **healthy** or unhealthy. A model is considered healthy if it can make accurate predictions.

#### Model status 

Displays how many of your models have been **activated**. An activated model is a model currently generating a score that is being written to user profiles, meaning the model's predictions can be used for audience segmentation and targeting.

#### Model usage 

Shows the percentage of all your audiences that are using activated models.

#### Model table

Allows you to search for any model in your account. Click any model to open its [summary](https://learn.lytics.com/documentation/product/features/laboratory/lookalike-models/model-summary) for more detailed information about that model.

See the [documentation](https://learn.lytics.com/documentation/product/features/laboratory/lookalike-models/model-dashboard) for more details. 

**Note: Lookalike Models Activation Limit** - You can only have 5 Lookalike Models active at a time, per the default account setting. If you're interested in a higher limit, please contact your Account Manager for more information.

**To use your model's predictions in audience segmentation and targeting, the model must be \_\_\_\_\_\_\_?**

A. Recently created

B. Activated

C. Top performing

Answer: B

### Model Summary

The Lookalike Model Summary surfaces valuable information and metrics about the performance and usage of your model.

#### Understanding Accuracy and Reach

At the top of each Model Summary page, you'll see two bars indicating the model's accuracy and reach scores.

![accuracy-reach.png](https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt62f4dd47b1e7fbb7/686bec9c77a15576584c156d/accuracy-reach.png)

*   **Accuracy**: the precision of a model’s predictions
*   **Reach**: the relative size of a Lookalike Model’s addressable audience

As a general principle, you cannot optimize for both accuracy and reach in a Lookalike Model. There are inherent tradeoffs between the two.

*   Optimize for **reach** for targeting users in earlier stages of your funnel
    *   Reach more users, while being less precise
*   Optimize for **accuracy** for targeting users in later stages of your funnel
    *   Be more precise, but reach fewer users

![accuracy-reach-graph.png](https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt71c82eed35bbc7b9/686bec9d167482a55e1b16f9/accuracy-reach-graph.png)

When balancing the tradeoff between accuracy and reach, consider the sum of the two scores to gauge a model's fitness for use: good models have an accuracy and reach sum of around 10, excellent models sum to more than 10, and fair or poor models sum to less than 10.
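
As a quick worked example (the scores below are invented for illustration):

```latex
\text{fitness} = \text{accuracy} + \text{reach}: \qquad
6.5 + 4.0 = 10.5 \ (\text{excellent, } > 10), \qquad
3.0 + 4.0 = 7.0 \ (\text{fair/poor, } < 10)
```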

Refer to the [Accuracy vs. Reach](https://learn.lytics.com/documentation/product/features/laboratory/lookalike-models/accuracy-vs-reach) documentation.

**Interpreting your model's results**

In the rest of the Model Summary, you can see how the source and target audiences overlap, the important features that contribute to the model's predictions, what audiences are using the model, and any diagnostic and troubleshooting messages.

![model-summary.png](https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt33a305dda3cad71c/686bec9d588d4685e5838125/model-summary.png)

**Model predictions**

Displays the size of your source and target audiences and charts the predictions for those audiences. The shape of the graph is most important: it indicates the amount of overlap between your source and target audiences.

**Model features importance**

Indicates which information the model is using to make predictions. More specifically, the chart lists the relative importance of features among users likely to convert from the source to the target audience.

**Model usage**

Displays a list of audiences that are utilizing this model for targeting and a button to create a new Predictive Audience with this model.

**Diagnostics & troubleshooting**

Provides messages pertaining to the performance and status of your Lookalike Model. These messages are categorized as warnings, errors, or information, and may include suggestions on how to improve your models.

Refer to the [Model Summary](https://learn.lytics.com/documentation/product/features/laboratory/lookalike-models/model-summary) documentation.

**Check your understanding of the tradeoffs between accuracy and reach.**

| Model profile | Interpretation |
| --- | --- |
| High reach | Better for earlier stages of your funnel |
| High accuracy | Better for later stages of your funnel |
| Low accuracy and low reach | Not a good model; shouldn't be used for predictions |

## Targeting Predictive Audiences

### Creating Predictive Audiences

Marketing campaigns run with **Predictive Audiences** from a healthy **Lookalike Model** are very likely to result in higher conversion rates. Even better, Lytics makes it super easy to create Predictive Audiences with the click of a button.

Really, it's that easy!

**Ok, show me how to create a Predictive Audience!**

Once a Lookalike Model is built and users are scored, follow these steps to make a Predictive Audience.

1.  Find the model of interest and go to its **Summary** page.
2.  Scroll to the **Model Usage** section.
3.  Click **Create Predictive Audience**.
4.  Now in the Audience Builder, you'll see the model predictions pre-populated as a user field named `segment_prediction`.
5.  You can adjust the prediction decision threshold or add rules to further refine the audience.

![model\_usage.png](https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt41dfebed02ce28ed/686bf67219cab28ee2cecd4b/model_usage.png)

Refer to the [Create Predictive Audiences](https://learn.lytics.com/documentation/product/features/laboratory/lookalike-models/model-builder#create-predictive-audiences) documentation.

### Selecting the right Source and Target

Building effective Lookalike Models that power your Predictive Audiences is an iterative process. 

Some experimentation is to be expected before you find the right audiences and configuration options for your use case, which ties into the idea behind the "Lytics Laboratory." 

#### Selecting the Right Source and Target Audience

One of the most important factors for Lookalike Model performance is your selection of the source and target audience. 

![Lytics\_Lookalike\_Models\_audience\_selection\_diagram.png](https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt69987467fd5b4f68/686c0d29f1c7c242864ecc5f/Lytics_Lookalike_Models_audience_selection_diagram.png)

Check out the documentation linked below for examples of how to select the right audiences for your use case.

Refer to the [Selecting the right audiences](https://learn.lytics.com/documentation/product/features/laboratory/lookalike-models/interpreting-model-messages#selecting-the-right-source-and-target-audience) documentation.  

**Match the audience type to the example use case.**

| Audience type | Example use case |
| --- | --- |
| Adjacent Audience | unknown audience → audience with email |
| Divergent Audience | unknown audience → audience with newsletter signup and single purchase |
| Overlapping Audience | unknown audience → audience with no purchases |

### Iterating your Predictive Audiences

Most Predictive Audiences are built by identifying users in the source audience who have a high model score, indicating they are likely to convert to the target audience. The score cutoff that defines "high" is called the Decision Threshold.

#### Adjusting the Decision Threshold

The decision threshold is a score from 0 to 1:

*   0 = very unlikely to convert
*   1 = very likely to convert
*   In most cases, the decision threshold is set to 0.5

To adjust the **reach** of the audience you are building, you might consider using a different decision threshold.

*   Lower decision threshold - **reach more users** in the source audience.
*   Higher decision threshold - **be more accurate but reach fewer users** in the source audience.
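
Concretely (the counts below are invented for illustration): because the Predictive Audience is just the set of source users whose score clears the threshold, the audience at a lower threshold always contains the audience at a higher one.

```latex
% Invented counts for an illustrative source audience of 100,000 scored users:
\{\,u : m(u) \ge 0.6\,\} \subset \{\,u : m(u) \ge 0.5\,\} \subset \{\,u : m(u) \ge 0.4\,\}
\qquad 12{,}000 < 20{,}000 < 31{,}000 \ \text{users}
```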

Refer to the [Decision Threshold](https://learn.lytics.com/documentation/product/features/laboratory/lookalike-models/interpreting-model-messages#adjusting-the-decision-threshold) documentation.

**How do I pick the right threshold?** You may see diagnostic messages suggesting you create audiences using a different decision threshold. But keep in mind any relevant domain knowledge (which the model doesn't have) when choosing the decision threshold.

**If you want to expand the reach of your Predictive Audience, how should you adjust the decision threshold?**

A. Raise the decision threshold

B. Keep it the same but pick a larger source audience

C. Lower the decision threshold

Answer: C

#### My model is unhealthy. What can I do?

See the [Improving Unhealthy Models](https://learn.lytics.com/documentation/product/features/laboratory/lookalike-models/interpreting-model-messages#improving-unhealthy-models) section for examples of why your model might not be performing well and suggested next steps.

**What is the most common reason a model is considered unhealthy?**

A. Auto Tune wasn't selected

B. Source and target audiences were divergent or overlapping

C. Decision threshold was too high

D. Not enough signal in the underlying data

Answer: B

**If you are trying to predict which users are likely to make a purchase, which of the following data sources are required to build an accurate Lookalike Model?**

A. Email data

B. Demographic data

C. Subscription data

D. Purchase data

Answer: D

## Next Steps

### More Resources

To continue learning, we recommend you check out the following resources:

*   [Lookalike Models documentation](https://learn.lytics.com/documentation/product/features/laboratory/lookalike-models/overview)
*   [Leveraging Lookalike Models & Predictive Audiences](https://learn.lytics.com/use-cases/leverage-lookalike-models-and-predictive-audiences)
    *   Learn about 6 different Use Cases across the customer lifecycle

Just as you iterate Lookalike Models, the Lytics Team wants to keep iterating our training guides.

#### Key takeaways

- Connect **Lookalike Models** back to your stack configuration before moving to the next module.
- Capture one concrete artifact (screenshot, Postman call, or code snippet) that proves the step works in your environment.
- Re-read the delivery versus management boundary for anything you changed in the entry model.

### Lesson 05 — Lytics Query Language

<!-- ai_metadata: {"lesson_id":"05","type":"video","duration_seconds":893,"video_url":"https://cdn.jwplayer.com/previews/g3xlbE7o","thumbnail_url":"https://cdn.jwplayer.com/v2/media/g3xlbE7o/poster.jpg?width=720","topics":["Lytics","Query","Language"]} -->

#### Video details

#### At a glance

- **Title:** Lytics Query Language - Learn LQL Basics
- **Duration:** 14m 53s
- **Media link:** https://cdn.jwplayer.com/previews/g3xlbE7o
- **Publish date (unix):** 1751912463

#### Streaming renditions

- application/vnd.apple.mpegurl
- audio/mp4 · AAC Audio · 113475 kbps
- video/mp4 · 180p · 200p · 154868 kbps
- video/mp4 · 270p · 300p · 189302 kbps
- video/mp4 · 360p · 400p · 217965 kbps
- video/mp4 · 406p · 450p · 233756 kbps
- video/mp4 · 540p · 600p · 296695 kbps

#### Timed text tracks (delivery)

- **thumbnails:** `https://cdn.jwplayer.com/strips/g3xlbE7o-120.vtt`

#### Transcript

Hello and welcome to the Lytics training session LQL 101. On today's agenda we will cover what LQL is, where you can find it in the UI. I'll show you the default LQL that comes out of the box in any account. We'll cover some basic syntax explanation so that people can read LQL at a beginner level. I'll also point out where our public documentation lives for anyone to view. In addition we'll cover why it's important to go back and read our public documentation to learn more about LQL. There are many ways to transform an incoming field and how you choose to transform that field may impact your experience in the audience builder. So what is LQL? This is the long answer which I won't read aloud but it's here and also available on our public documentation. The short answer is that LQL transforms our incoming data. LQL enriches user profiles by structuring incoming data with unique identifiers. The result of this is the ability to build educated audiences in Lytics. So in general this is the high-level process of how things are happening. An event will come in such as page click or logs in and it will go to our cloud storage. It'll also go through our LQL where that data is transformed in meaningful ways to enrich user profiles. So here's a basic skeleton outline of what an LQL file looks like. At the very top we typically put in the file name and some important descriptions about the file. An important note is that this will appear in the Lytics UI. Next we have our select statement which is telling us to select incoming fields which you'll find on the far left side as a Lytics slug, which is just the normalized version of the incoming field names for Lytics to process those fields. Next we have the option to use conditional statements and then we have short descriptions. This is the user-friendly name that you'll find in the Lytics UI. So instead of having first name as camel case, first name as snake case, we'll just have in title case first name. Next is the long description. This is completely optional and it's only found in a very specific place in the UI. At the end of the LQL files we have our unique identifiers, our by fields. We separate these from the other fields just for easy identification. Essentially all the fields above the by fields are able to be stored onto the user profile via the unique identifiers. So first names will come in with email UID or account ID and any other field from the above will have to come in with either email UID or account ID. At the very bottom of the LQL file we have the statements from, into, by, and alias. From denotes which stream the incoming data will come from. So if it's click stream data or data from the web layer etc that'll be denoted here. Into user. User is just the table that we're storing the fields on. It's the most popular table. We also have a content table but we won't get into that today. By is the by fields that we're using to store those incoming fields on. Again email UID, account ID. So the incoming fields will be stored onto the user profile again via email UID or account ID. The final statement is alias and this is just the unique file name. Unique is important here. Every LQL file name must be unique. So this is our demo account. If you go to the data tab and then the sub tab queries, you'll find all the LQL for the account. Starting from Lytics content all the way to user web pathfora, these eight files right here come out of the box. Next to the LQL file names we'll see which table (user, content, etc.) those fields are going to be stored upon. On the far right hand side is where we see the titles of the LQL files. So this top section here, where I mentioned we typically store the file name and/or some descriptions, is what appears in the Lytics UI. That appears right here. So let's click into an LQL file. We'll start with user web default. This is user web default data out of the box. So we'll store things like the anonymous ID, first name, last name, city, state, country, company, URLs, email list, email sent, email clicks, etc. It's coming in on the default stream into the user table via these by fields: UIDs, email, or phone number. And the name of the LQL file is user web default. These are the names you see in the audience builder and elsewhere in the UI. So the timestamp, sometimes stamp field, is going to be stored as time of last visit. Something I didn't mention earlier is that we have data types as well. We call them kinds: kind date, kind string, int, maps, many different data types we can use to transform your data, from some subscription data to taking that subscription and mapping it to a subscription status. Now we'll take a look into some public documentation. If I search for Lytics, go to our website, knowledge center, product documentation, I can search by LQL in here. And here's our LQL documentation for anyone to view. Scrolling through, we have some LQL examples, standard syntax descriptions (select, from, into, where, by, alias) in case you were to forget. Here are all the functions available to transform your data, from aggregate functions to logical functions, string functions, hash and encoding functions, and much much more. If you continue to scroll down, we'll eventually reach the data types we have, or kinds. Anything from an int, date, arrays, maps, many ways to take an incoming field to then transform it to enrich profiles. So now we'll take a look into the UI, and I'll show you how choosing specific functions or data types will change what you'll experience in the audience builder. So we're back at the queries tab, and before I go to the audience builder, we'll just click on this query here. So this is considered a custom LQL file based on email events. I'll open a new tab to the audience builder. Select create a new audience. In the custom rules tab here, we'll find all the fields from the LQL. Here I can choose a source. Custom email is what we just took a look at, and here are all the short descriptions from that LQL file. Going back to the LQL file, let's take a look at type date. So here we get this histogram where we can see on October 31st, 655 last email open dates came in, 587 on November 30th, and so on. We also have options to pick a relative date or a specific date. All of this is possible because the LQL is denoted type date. Now let's take a look at what happens with a string value in the audience builder, such as email address. So now we have the options to take a string and say I want to find a string that's exactly equal to something, contains some sort of substring, contains one string or another, if that string exists, equals one of, and there's also this be like button as well. So if that field simply exists, there's 10,000 users with it. In the case of email, we could say contains some email domain. In this case, approximately 3,000 users have an "@demo" in their email address. Now let's try to find a couple more different data types. I came back to the default web data LQL to find another data type for us to take a look at in the audience builder. Let's take a look at this one. We're mapping an email list to email sent. Email list is type string. Email sent, the number of emails sent, is type int. I can find this field in the audience builder by searching for the short description. Now we have a different view. These are the different strings appearing in this map. Let's go to tech news. We can say we want tech news to be greater than two times, and this is our audience. We could say be less than 10, and now this is our audience. So we were able to select a string in that previous window and then select by the integers of how many email sends are stored on the user profile. There are many other ways to transform the audience builder based on the data type and functions that we're using on those incoming field names. Again, there is a lot to learn about LQL and this is meant to be an intro to just get you started. I highly recommend coming back to this page and reviewing all the different functions and data types and start thinking about different ways you can transform your incoming fields and enrich user profiles to meet your use cases. Thank you for watching. See you next time. Goodbye.

#### Subtitles (WebVTT)

```webvtt
WEBVTT

1
00:00:00.000 --> 00:00:08.400
Hello and welcome to the Lytics training session LQL 101. On today's agenda we

2
00:00:08.400 --> 00:00:14.520
will cover what LQL is, where you can find it in the UI. I'll show you the

3
00:00:14.520 --> 00:00:19.880
default LQL that comes out of the box in any account. We'll cover some basic

4
00:00:19.880 --> 00:00:25.680
syntax explanation so that people can read LQL at a

5
00:00:25.680 --> 00:00:31.000
beginner level. I'll also point out where our public documentation lives for

6
00:00:31.000 --> 00:00:38.240
anyone to view. In addition we'll cover why it's important to go back and read

7
00:00:38.240 --> 00:00:43.000
our public documentation to learn more about LQL. There are many ways to

8
00:00:43.000 --> 00:00:48.160
transform an incoming field and how you choose to transform that field may

9
00:00:48.160 --> 00:00:59.160
impact your experience in the audience builder. So what is LQL? This is the long

10
00:00:59.160 --> 00:01:04.560
answer which I won't read aloud but it's here and also available on our public

11
00:01:04.560 --> 00:01:14.080
documentation. The short answer for what LQL is is that LQL transforms our

12
00:01:14.080 --> 00:01:20.800
incoming data. LQL enriches user profiles by structuring incoming data

13
00:01:20.800 --> 00:01:27.720
with unique identifiers. The result of this is the ability to build educated

14
00:01:27.720 --> 00:01:37.040
audiences in Lytics. So in general this is the high-level process of how things are

15
00:01:37.040 --> 00:01:44.920
happening. An event will come in such as page click or logs in and it will go to

16
00:01:44.920 --> 00:01:52.600
our cloud storage. It'll also go through our LQL where that data is transformed

17
00:01:52.600 --> 00:02:02.960
in meaningful ways to enrich user profiles. So here's a basic skeleton

18
00:02:02.960 --> 00:02:11.920
outline of what a LQL file looks like. At the very top we have we typically put

19
00:02:11.920 --> 00:02:16.800
in the file name and some important descriptions about the file. An important

20
00:02:16.800 --> 00:02:23.720
note is that this will appear in the Lytics UI. Next we have our select

21
00:02:23.720 --> 00:02:28.280
statement which is telling us to select incoming fields which you'll find on the

22
00:02:28.280 --> 00:02:35.960
far left side as a Lytics slug which is just the normalized version of the

23
00:02:35.960 --> 00:02:45.080
incoming field names for Lytics to process those fields. Next we have the option to

24
00:02:45.080 --> 00:02:51.320
use conditional statements and then we have short descriptions. This is the

25
00:02:51.320 --> 00:02:56.600
user-friendly name that you'll find in the Lytics UI. So instead of having first

26
00:02:56.720 --> 00:03:03.760
name as camel case, first name as snake case, we'll just have in title case first

27
00:03:03.760 --> 00:03:10.920
name. Next is the long description. This is completely optional and it's only

28
00:03:10.920 --> 00:03:17.160
found in a very specific place in the UI. At the end of the LQL files we have our

29
00:03:17.160 --> 00:03:22.160
unique identifiers, our by fields. We separate these from the other fields

30
00:03:22.160 --> 00:03:26.880
just for easy identification. Essentially all the fields above the

31
00:03:26.880 --> 00:03:32.640
by fields are able to be stored onto the user profile via the unique identifiers.

32
00:03:32.640 --> 00:03:39.440
So first names will come in with email UID or account ID and any other field

33
00:03:39.440 --> 00:03:48.320
from the above will have to come in with either email UID or account ID. At the

34
00:03:48.320 --> 00:03:55.920
very bottom of the LQL file we have the statements from, into, by, and alias. From

35
00:03:55.920 --> 00:04:03.040
denotes which stream name that the incoming data will come from. So if it's

36
00:04:03.040 --> 00:04:13.160
click stream data or data from the web layer etc that'll be denoted here. Into

37
00:04:13.280 --> 00:04:19.600
user. User is just the table that we're storing the fields on. It's the most

38
00:04:19.600 --> 00:04:24.680
popular table. We also have a content table but we won't get into that today.

39
00:04:24.680 --> 00:04:33.120
By is the by fields that we're using to store those incoming fields on. Again

40
00:04:33.120 --> 00:04:39.640
email UID, account ID. So the incoming fields will be stored onto the user

41
00:04:39.760 --> 00:04:47.960
profile again via email UID or account ID. The final statement is alias and this

42
00:04:47.960 --> 00:04:53.720
is just the unique file name. Unique is important here. Every LQL file name must

43
00:04:53.720 --> 00:05:05.880
be unique. So this is our demo account. If you go to the data tab and then the

44
00:05:05.880 --> 00:05:15.400
sub tab queries, you'll find all the LQL for the account. Starting from Lytics

45
00:05:15.400 --> 00:05:21.480
content all the way to user web pathfora. These eight files right here come out of

46
00:05:21.480 --> 00:05:30.520
the box. Next to the LQL file names we'll see which table. User, content, etc. Those

47
00:05:30.560 --> 00:05:36.840
fields are going to be stored upon. On the far right hand side this is where we

48
00:05:36.840 --> 00:05:45.000
see the titles of the LQL files. So this top section here where I mentioned we

49
00:05:45.000 --> 00:05:50.280
typically store the file name and or some descriptions and where it appears

50
00:05:50.280 --> 00:05:59.240
in the Lytics UI. That appears right here. So let's click into an LQL file. We'll

51
00:05:59.240 --> 00:06:07.120
start with user web default. This is user web default data out of the box. So

52
00:06:07.120 --> 00:06:16.280
we'll store things like the anonymous ID, first name, last name, city, state, country,

53
00:06:16.280 --> 00:06:28.960
company, city, URLs, email list, email sent, email clicks, etc. It's coming to the

54
00:06:28.960 --> 00:06:37.600
default stream into the user table via these by fields. UIDs, email, or phone

55
00:06:37.600 --> 00:06:48.560
number. And the name of the LQL file is user web default. These are the names

56
00:06:48.560 --> 00:06:54.640
you see in the audience builder and elsewhere in the UI. So to timestamp,

57
00:06:54.640 --> 00:07:00.320
sometimes stamp field, is going to be stored as time of last visit. Something I

58
00:07:00.320 --> 00:07:06.000
didn't mention earlier is that we have data types as well. We call them kinds,

59
00:07:06.000 --> 00:07:16.400
kind date, kind string, int, maps, many different data types we can use to

60
00:07:16.400 --> 00:07:24.040
transform your data from some subscription data, from some subscription

61
00:07:24.040 --> 00:07:34.800
status, to taking that subscription and mapping it to a subscription status. Now

62
00:07:34.800 --> 00:07:43.920
we'll take a look into some public documentation. If I search for Lytics, go to

63
00:07:43.920 --> 00:07:55.720
our website, knowledge center, product documentation. I can search by LQL in

64
00:07:55.720 --> 00:08:11.240
here. And here's our LQL documentation for anyone to view. Scrolling through, we

65
00:08:11.240 --> 00:08:20.560
have some LQL examples, standard syntax descriptions, select from, into, where, by,

66
00:08:20.560 --> 00:08:27.880
alias, in case you were to forget. Here are all the functions available to

67
00:08:27.880 --> 00:08:34.120
transform your data from aggregate functions to logical functions, string

68
00:08:34.200 --> 00:08:44.680
functions, hash and encoding functions, and much much more. If you continue to

69
00:08:44.680 --> 00:08:52.640
scroll down, we'll eventually reach the data types we have, or kinds. Anything

70
00:08:52.640 --> 00:09:02.080
from an int, date, arrays, maps, many ways to take an incoming field to then

71
00:09:02.080 --> 00:09:09.360
transform it to enrich profiles. So now we'll take a look into the UI, and I'll

72
00:09:09.360 --> 00:09:17.720
show you how choosing specific functions or data types will change what you'll

73
00:09:17.720 --> 00:09:24.400
experience in the audience builder. So we're back at the queries tab, and before

74
00:09:24.400 --> 00:09:29.840
I go to the audience builder, we'll just click on this query here. So this is

75
00:09:29.840 --> 00:09:39.200
considered a custom LQL file based on email events. I'll open the

76
00:09:39.200 --> 00:09:43.040
new tab to the audience builder.

77
00:09:50.680 --> 00:09:59.160
Select create a new audience. In the custom rules tab here, we'll find all the

78
00:09:59.160 --> 00:10:08.320
fields from the LQL. Here I can choose a source. Custom email is what we just took

79
00:10:08.320 --> 00:10:15.960
a look at, and here are all the short descriptions from that LQL file.

80
00:10:16.200 --> 00:10:19.200
These.

81
00:10:20.040 --> 00:10:32.360
Going back to the LQL file, let's take a look at type date.

82
00:10:36.680 --> 00:10:45.280
So here we get this histogram where we can see on October 31st, 655 last email

83
00:10:45.320 --> 00:10:57.440
open dates came in, 587 on November 30th, and so on. We also have options to pick a

84
00:10:57.440 --> 00:11:01.480
relative date or a specific date.

85
00:11:05.920 --> 00:11:14.160
All of this is possible because the LQL is denoted type date. Now let's take a

86
00:11:14.160 --> 00:11:24.840
look at what happens with a string value in the audience builder, such as

87
00:11:24.840 --> 00:11:33.800
email address. So now we have the options to take a string and say I want to find

88
00:11:33.800 --> 00:11:38.320
a string that's exactly equal to something, contains some sort of

89
00:11:38.320 --> 00:11:46.520
substring, contains one string or another. If that string exists, equals one

90
00:11:46.520 --> 00:11:51.680
of, and there's also this be like button as well.

91
00:11:55.920 --> 00:12:05.520
So if that field simply exists, there's 10,000 users with it. In the case of

92
00:12:05.520 --> 00:12:15.160
email, we could say contains some email domain. In this case, approximately 3,000

93
00:12:15.160 --> 00:12:25.880
users have an at demo in their email address. Now let's try to find a couple

94
00:12:25.880 --> 00:12:32.280
more different data types. I came back to the default web data LQL to find

95
00:12:32.280 --> 00:12:39.160
another data type for us to take a look at in the audience builder. Let's take a

96
00:12:39.160 --> 00:12:49.440
look at this one. We're mapping an email list to email sent. Email list is type

97
00:12:49.440 --> 00:12:56.760
string. Email sent, number of emails sent, is type int.

98
00:13:02.280 --> 00:13:13.880
I can find this field in the audience builder by searching for the short

99
00:13:13.880 --> 00:13:16.480
description.

100
00:13:23.480 --> 00:13:29.080
Now we have a different view. These are the different strings appearing in this

101
00:13:29.080 --> 00:13:41.360
map. Let's go to tech news. We can say we want tech news to be greater than two

102
00:13:41.360 --> 00:13:51.640
times, and this is our audience. We could say be less than 10, and now this is our

103
00:13:51.640 --> 00:14:01.200
audience. So we were able to select a string in that previous window and then

104
00:14:01.200 --> 00:14:09.280
select by the integers of how many email sents is stored on the user profile. There

105
00:14:09.280 --> 00:14:13.960
are many other ways to transform the audience builder based on the data type

106
00:14:14.080 --> 00:14:23.320
and functions that we're using along those incoming field names. Again, there

107
00:14:23.320 --> 00:14:28.240
is a lot to learn about LQL and this is meant to be an intro to just get you

108
00:14:28.240 --> 00:14:33.480
started. I highly recommend coming back to this page and reviewing all the

109
00:14:33.480 --> 00:14:38.400
different functions and data types and start thinking about different ways you

110
00:14:38.400 --> 00:14:46.520
can transform your incoming fields and enrich user profiles to meet your use

111
00:14:46.520 --> 00:14:54.400
cases. Thank you for watching. See you next time. Goodbye.

```

```transcript
<!-- PLACEHOLDER: replace with real transcript before publish if cues were auto-derived from WebVTT -->
[00:00] Hello and welcome to the Lytx training session LQL 101. On today's agenda we
[00:08] will cover what LQL is, where you can find it in the UI. I'll show you the
[00:14] default LQL that comes out of the box in any account. We'll cover some basic
[00:19] syntax explanation so that people can read LQL at a
[00:25] beginner level. I'll also point out where our public documentation lives for
[00:31] anyone to view. In addition we'll cover why it's important to go back and read
[00:38] our public documentation to learn more about LQL. There are many ways to
[00:43] transform an incoming field and how you choose to transform that field may
[00:48] impact your experience in the audience builder. So what is LQL? This is the long
[00:59] answer which I won't read aloud but it's here and also available on our public
[01:04] documentation. The short answer for what LQL is is that LQL transforms our
[01:14] incoming data. LQL enriches user profiles by structuring incoming data
[01:20] with unique identifiers. The result of this is the ability to build educated
[01:27] audiences in Lytx. So in general this is the high-level process of how things are
[01:37] happening. An event will come in such as page click or logs in and it will go to
[01:44] our cloud storage. It'll also go through our LQL where that data is transformed
[01:52] in meaningful ways to enrich user profiles. So here's a basic skeleton
[02:02] outline of what an LQL file looks like. At the very top we typically put
[02:11] in the file name and some important descriptions about the file. An important
[02:16] note is that this will appear in the Lytics UI. Next we have our select
[02:23] statement, which is telling us to select incoming fields, which you'll find on the
[02:28] far left side, as a Lytics slug, which is just the normalized version of the
[02:35] incoming field names for Lytics to process those fields. Next we have the option to
[02:45] use conditional statements, and then we have short descriptions. This is the
[02:51] user-friendly name that you'll find in the Lytics UI. So instead of having first
[02:56] name in camel case or first name in snake case, we'll just have, in title case, first
[03:03] name. Next is the long description. This is completely optional and it's only
[03:10] found in a very specific place in the UI. At the end of the LQL files we have our
[03:17] unique identifiers, our by fields. We separate these from the other fields
[03:22] just for easy identification. Essentially, all the fields above the
[03:26] by fields are able to be stored onto the user profile via the unique identifiers.
[03:32] So first names will come in with email, UID, or account ID, and any other field
[03:39] from the above will have to come in with either email, UID, or account ID. At the
[03:48] very bottom of the LQL file we have the statements from, into, by, and alias. From
[03:55] denotes which stream name the incoming data will come from. So if it's
[04:03] clickstream data or data from the web layer, etc., that'll be denoted here. Into
[04:13] user: user is just the table that we're storing the fields on. It's the most
[04:19] popular table. We also have a content table, but we won't get into that today.
[04:24] By is the by fields that we're using to store those incoming fields on. Again:
[04:33] email, UID, account ID. So the incoming fields will be stored onto the user
[04:39] profile again via email, UID, or account ID. The final statement is alias, and this
[04:47] is just the unique file name. Unique is important here. Every LQL file name must
[04:53] be unique. So this is our demo account. If you go to the data tab and then the
[05:05] sub-tab queries, you'll find all the LQL for the account, starting from Lytics
[05:15] content all the way to user web pathfora. These eight files right here come out of
[05:21] the box. Next to the LQL file names we'll see which table (user, content, etc.) those
[05:30] fields are going to be stored upon. On the far right-hand side is where we
[05:36] see the titles of the LQL files. So this top section here, where I mentioned we
[05:45] typically store the file name and/or some descriptions, and where it appears
[05:50] in the Lytics UI, that appears right here. So let's click into an LQL file. We'll
[05:59] start with user web default. This is user web default data out of the box. So
[06:07] we'll store things like the anonymous ID, first name, last name, city, state, country,
[06:16] company, URLs, email list, email sent, email clicks, etc. It's coming in from the
[06:28] default stream into the user table via these by fields: UIDs, email, or phone
[06:37] number. And the name of the LQL file is user web default. These are the names
[06:48] you see in the audience builder and elsewhere in the UI. So the ts,
[06:54] the timestamp field, is going to be stored as time of last visit. Something I
[07:00] didn't mention earlier is that we have data types as well. We call them kinds:
[07:06] kind date, kind string, int, maps, many different data types we can use to
[07:16] transform your data from some subscription data, from some subscription
```
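
To make that skeleton concrete, here is a minimal sketch of a custom LQL file following the structure narrated above: a descriptive comment block, a select statement with slugs, short descriptions, and kinds, the by fields kept at the bottom, and the closing from/into/by/alias statements. The stream, field, and alias names (`custom_web_stream`, `user_web_custom`, etc.) are invented for illustration, and the SQL-style `--` comments and exact keyword spellings should be verified against the LQL documentation linked under More Resources.

```
-- user_web_custom.lql
-- Maps incoming web fields onto the user profile. This top comment block
-- is the description that surfaces in the Lytics UI.
SELECT

   first_name     AS first_name   SHORTDESC "First Name"          KIND string
 , todate(ts)     AS last_visit   SHORTDESC "Time of Last Visit"  KIND date

 -- unique identifiers (by fields), separated at the bottom for easy identification
 , email          AS email        SHORTDESC "Email Address"       KIND string

FROM custom_web_stream  -- stream the incoming events arrive on
INTO user               -- table the fields are stored on
BY email                -- identifier used to store fields onto profiles
ALIAS user_web_custom   -- must be unique across all LQL files in the account
```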

#### Lesson text

Developer training on basic LQL syntax and how Lytics transforms incoming data into actionable user profiles.

## Intro to LQL

### Learn LQL basics

**Note:** On January 10, 2023, we upgraded our UI with a new, refreshed interface. All of the underlying functionality is the same, but you will notice that things look a little different from this Academy guide. The most notable change is that the navigation menu has moved from the top of the app to the left side. We appreciate your patience as we work on updating our Academy.

### What will I learn?

*   What is LQL? 
*   Where can I view LQL files in the Lytics UI?
*   What is the basic syntax?
*   What are the different ways to transform incoming data fields?

Watch the "Lytics Query Language - Learn LQL Basics" training video below (15 mins) for an overview of **Lytics Query Language** (LQL). We'll cover the basics of how LQL transforms data, the difference between default and custom LQL, and how key identifiers merge user profiles.

**Keep in Mind** - The end result of LQL is the ability to build educated user profiles and audiences in Lytics.
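
To illustrate the merge behavior, below is a sketch of two queries (in practice, two separate LQL files) that map differently named source fields onto the same `first_name` slug. Because both write into the user table BY email, values from both streams land on one merged profile. The stream names `crm_stream` and `web_stream` and all field names are hypothetical.

```
-- Sketch only: stream, field, and alias names are illustrative.
-- File 1: CRM data keyed on email.
SELECT
    fname AS first_name SHORTDESC "First Name" KIND string
  , email AS email      SHORTDESC "Email"      KIND string
FROM crm_stream
INTO user
BY email
ALIAS crm_user_fields

-- File 2: web data. firstName maps to the same first_name slug,
-- so both sources enrich one profile keyed on email.
SELECT
    firstName AS first_name SHORTDESC "First Name" KIND string
  , email     AS email      SHORTDESC "Email"      KIND string
FROM web_stream
INTO user
BY email
ALIAS web_user_fields
```

The design point here is that the slug (`first_name`) determines where a value lands on the profile, while the by field (`email`) determines which profile it lands on.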

### Knowledge Check

Before you dive into writing your own LQL files, check your understanding of the key concepts covered.

**Copies of raw events are stored prior to LQL processing.**

A. True

B. False

Answer: A

**Which name on the LQL file is displayed and searchable in the Lytics Audience Builder?**

A. SELECT Field

B. Short Description

C. BY Field

D. Long Description

Answer: B

**Each LQL file must have a unique name set as the alias.**

A. True

B. False

Answer: A

**How are data types (e.g., string, integer, map) referenced in LQL files?**

A. SORT

B. TYPE

C. KIND

Answer: C

**KINDs will impact the data visualizations in the Audience Builder.**

A. True

B. False

Answer: A

**How many out-of-the-box LQL files are there?**

A. 4

B. 3

C. 8

D. 0

Answer: C

### More Resources

#### Documentation

*   [Lytics Query Language](https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/lytics-query-language) - more detailed explanation and syntax examples
*   [LQL & Data Import Basics](https://learn.lytics.com/documentation/developer/academy/lql-and-data-import-basics) - introductory API documentation

#### Key takeaways

- Connect **Lytics Query Language** back to your stack configuration before moving to the next module.
- Capture one concrete artifact (screenshot, Postman call, or code snippet) that proves the step works in your environment.
- Re-read the delivery versus management boundary for anything you changed in the entry model.

## Resources & references

| Page | Companion Markdown |
| --- | --- |
| /courses/beyond-the-basics/lytics-data-flow | /academy/md/courses/beyond-the-basics/lytics-data-flow.md |
| /courses/beyond-the-basics/lytics-javascript-tag | /academy/md/courses/beyond-the-basics/lytics-javascript-tag.md |
| /courses/beyond-the-basics/jobs-and-authorizations | /academy/md/courses/beyond-the-basics/jobs-and-authorizations.md |
| /courses/beyond-the-basics/lookalike-models | /academy/md/courses/beyond-the-basics/lookalike-models.md |
| /courses/beyond-the-basics/lytics-query-language | /academy/md/courses/beyond-the-basics/lytics-query-language.md |

## Supplement for indexing

### Content summary

Dive into more advanced topics to optimize your use of Lytics and unlock new activations.

### Retrieval tags

- lytics
- beyond-the-basics
- Lytics Data Flow
- Lytics JavaScript Tag
- Jobs and Authorizations
- Lookalike Models
- Lytics Query Language
- beyond-the-basics course

### Indexing notes

Chunk at each "### Lesson NN — Title" heading; copy lesson_id and topics from the preceding HTML comment into chunk metadata for RAG filters.
Course slug: beyond-the-basics. Union of lesson topics: Lytics Data Flow, Lytics JavaScript Tag, Jobs and Authorizations, Lookalike Models, Lytics Query Language.
Do not embed or retrieve LMS-only quiz items or mastery exam answer keys from this export.

### Asset references

| Label | URL |
| --- | --- |
| Video thumbnail: Lytics Data Flow | `https://cdn.jwplayer.com/v2/media/FQmwlMCw/poster.jpg?width=720` |
| Video thumbnail: Lytics JavaScript Tag | `https://cdn.jwplayer.com/v2/media/d4p068ys/poster.jpg?width=720` |
| Video thumbnail: Jobs and Authorizations | `https://cdn.jwplayer.com/v2/media/igYBULhs/poster.jpg?width=720` |
| Jobs - Authorizations Nav.png | `https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/bltd91798a2a2383c49/686b9df7b6f4b47e96082834/Jobs_-_Authorizations_Nav.png` |
| mailchimp-job-type.png | `https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/bltf4d8083e08623956/686b9e903687732f96eec3b9/mailchimp-job-type.png` |
| aws-job-types.png | `https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/bltc4e076777850d748/686b9e914b4fe965bf5c4f12/aws-job-types.png` |
| jobs-wizard.png | `https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt51edc03d0040b5f7/686b9fb54b4fe98df75c4f23/jobs-wizard.png` |
| lytics-job-summary-example.png | `https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/bltb0631065fb8f196c/686ba0fff1c7c2d1154ec68f/lytics-job-summary-example.png` |
| Lytics\_authorizations\_dashboard.png | `https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/bltf35e98764b7c09c4/686ba21828e271aa3aa3b482/Lytics_authorizations_dashboard.png` |
| auth-creation-in-job-wizard.png | `https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt6d58d1161a4d4755/686ba21802334227e192e5ee/auth-creation-in-job-wizard.png` |
| authorizations-summary-example.png | `https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt2d34fee5ed4eaa19/686ba3ef321163ea84005610/authorizations-summary-example.png` |
| Video thumbnail: Lookalike Models | `https://cdn.jwplayer.com/v2/media/oXPDbF1w/poster.jpg?width=720` |
| Click Laboratory > UI Models.png | `https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/bltb24a9f4465cfcc2c/686bd5514e325563d15687ae/Click_Laboratory_UI_Models.png` |
| click-create-new-model.png | `https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt49a93af1fddafc74/686be0df47548f406f569af6/click-create-new-model.png` |
| lytics-lookalike-model-summary.png | `https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt9aae40f62f0e263e/686be6318f61ad7bdddc66d6/lytics-lookalike-model-summary.png` |
| accuracy-reach.png | `https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt62f4dd47b1e7fbb7/686bec9c77a15576584c156d/accuracy-reach.png` |
| accuracy-reach-graph.png | `https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt71c82eed35bbc7b9/686bec9d167482a55e1b16f9/accuracy-reach-graph.png` |
| model-summary.png | `https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt33a305dda3cad71c/686bec9d588d4685e5838125/model-summary.png` |
| model\_usage.png | `https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt41dfebed02ce28ed/686bf67219cab28ee2cecd4b/model_usage.png` |
| Lytics\_Lookalike\_Models\_audience\_selection\_diagram.png | `https://images.contentstack.io/v3/assets/bltebc53cfaf0dd6403/blt69987467fd5b4f68/686c0d29f1c7c242864ecc5f/Lytics_Lookalike_Models_audience_selection_diagram.png` |
| Video thumbnail: Lytics Query Language | `https://cdn.jwplayer.com/v2/media/g3xlbE7o/poster.jpg?width=720` |

### External links

| Label | URL |
| --- | --- |
| Contentstack Academy home | `https://www.contentstack.com/academy/` |
| Training instance setup | `https://www.contentstack.com/academy/training-instance` |
| Academy playground (GitHub) | `https://github.com/contentstack/contentstack-academy-playground` |
| Contentstack documentation | `https://www.contentstack.com/docs/` |
| How does Lytics work? | `https://learn.lytics.com/documentation/product/features/getting-started/how-does-lytics-work` |
| Onboarding Web Data | `https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/onboarding-web-data` |
| Integrated Marketing Tools | `https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/integrated-marketing-tools` |
| Data Streams | `https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/data-streams` |
| Data Schema | `https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/schema-audit` |
| User Fields | `https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/user-fields` |
| JavaScript Snippet | `https://app.lytics.com/connect?view=v3` |
| Installation & Configuration | `https://learn.lytics.com/documentation/product/features/lytics-javascript-tag/using-version-3/installation-configuration` |
| Version 3 documentation | `https://learn.lytics.com/documentation/product/features/lytics-javascript-tag/version-3-improvements` |
| Working with Tag Managers | `https://learn.lytics.com/documentation/product/features/lytics-javascript-tag/working-with-tag-managers` |
| Lytics JavaScript Tag introduction | `https://learn.lytics.com/documentation/product/features/lytics-javascript-tag/introduction` |
| Collecting Data with V3 Tag | `https://learn.lytics.com/documentation/product/features/lytics-javascript-tag/using-version-3/collecting-data` |
| Troubleshooting: Verifying Data is sent to Lytics | `https://learn.lytics.com/documentation/product/features/lytics-javascript-tag/troubleshooting#verify-data-is-being-sent-to-lytics` |
| Installing Lytics Image Pixel | `https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/onboarding-web-data#lytics-image-pixel` |
| Jobs Dashboard | `https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/jobs/jobs-dashboard` |
| Job Summary | `https://learn.lytics.com/documentation/product/features/data-onboarding-and-management/jobs/job-summary` |
