AudienceStore and EventStore

This article describes how to access your AudienceStore and EventStore services.


These services must be activated for your account. Contact your account manager for more information.

How it works

AudienceStore and EventStore services store your unstructured audience and event data as compressed JSON files in Tealium’s Amazon S3 bucket. The Tealium Collect tag (or another Collect library) sends event data to the Customer Data Hub, where it is captured in Tealium’s Amazon S3 bucket as flattened JSON data. When the accumulated data reaches 100 MB (uncompressed) or one hour elapses, whichever comes first, the data is compressed and prepared for Redshift. The compressed data is then copied and imported into the Redshift database for your account.
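To make the file format concrete, the following is a local sketch of what the pipeline produces: a gzip-compressed file of newline-delimited, flattened JSON events. The field names and file names here are illustrative only, not the exact attribute names Tealium emits.

```shell
# Hypothetical illustration: simulate one DataAccess file locally.
# Flattened JSON means nested objects become dot-notation keys.
printf '%s\n' '{"event_id":"abc123","data.udo.page_name":"home"}' > events.json

# Compress it the way files are stored on S3
gzip -f events.json          # produces events.json.gz

# Inspect the compressed file without extracting it
gunzip -c events.json.gz | head -1
```

Each line in a file is one complete JSON event, so the files can be processed line by line by downstream tools.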

AudienceStore file paths

Data files for AudienceStore are stored in the S3 bucket with a path structure that includes your account name, profile name, and the action ID of the AudienceStore connector:


The action ID can be found in the Details of the AudienceStore connector action, as seen here:

Action ID in AudienceStream detail

Example path to AudienceStore file:


EventStore file paths

Data files for EventStore are stored in the S3 bucket as follows:


The event feed ID can be found in the URL for the event feed. Navigate to Live Events, click the feed to select it, and inspect the URL to retrieve the feed ID, as seen here:

EventStore feed ID highlighted in the URL

Example path to EventStore file:


View files in the console

You can browse the files associated with each audience or event feed using the DataAccess console. This is an easy way to verify that data is flowing through the system and a quick way to download a sample file to get familiar with the format.

Use the following steps to access files using the DataAccess console:

  1. In the sidebar, go to DataAccess > EventStore or DataAccess > AudienceStore.
  2. Select the number of weeks of data to display.
  3. Select the name of the Event Feed or the AudienceStore Action.
  4. Click Reload.
  5. Click a date to expand the list of file details.
  6. Find the file you want and click Download.

The .gz file is saved to your computer, where you can use an unzip utility to open it.
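On macOS or Linux, the downloaded file can be opened from the command line with gunzip. The sketch below creates a stand-in file first so the commands are self-contained; the file name is hypothetical.

```shell
# Create a stand-in for a downloaded DataAccess file (name is hypothetical)
printf '%s\n' '{"event_id":"abc123"}' | gzip > sample-download.gz

# Decompress to a plain JSON file, keeping the original archive intact
gunzip -c sample-download.gz > sample-download.json

# Or page through the contents directly without extracting
gunzip -c sample-download.gz | head
```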

Using third-party tools to view files

You can also access files using third-party tools, such as an FTP client with S3 support or the AWS Command Line Interface (CLI). To allow these tools to access your files, you need the Amazon S3 access key for your S3 bucket.

Amazon S3 access key

To get the Amazon S3 access key:

  1. In the sidebar, go to DataAccess > EventStore or DataAccess > AudienceStore.
  2. Click Get Amazon Access Key. The following fields are displayed:
    • Access Key ID
    • Secret Access Key
    • Path

For security purposes, the secret access key is displayed only once, so it’s important to store it securely for later use.

If you lose this value, you can generate a new one, but doing so invalidates all previous connections that used the old value.

FTP clients with Amazon S3 support

To download a large number of files more conveniently, we recommend using a desktop application.

Here are some client applications that work with Amazon S3.

  • Windows: Cyberduck, CrossFTP
  • Mac: Cyberduck, CrossFTP, Transmit

The primary benefit of using a GUI-based FTP client with S3 support is that you can point and click on individual files and folders to download them from Amazon S3.

The following screen capture of Cyberduck shows how to configure the connection. Note that the configuration wizard does not have a field for the Secret Access Key. You are prompted for the Secret Access Key upon a connection attempt. The Secret Access Key is saved for future use.


View files using the AWS command line interface

For more technical users, the AWS Command Line Interface (CLI) can be installed to give you full control over accessing your data files. The primary benefit of the AWS CLI is the ability to customize it for your specific needs, such as syncing and automating file retrieval from Amazon S3.

The following are a few of the uses for the AWS CLI:

  • Initial bulk download of all historical log files
  • Schedule hourly incremental download to grab only the newest generated log files
  • Synchronize a local folder on your desktop or server to a remote folder on S3 so that they contain exactly the same log file content
  • Download files before and/or after a certain LastModified date

To install the AWS CLI, refer to the following Amazon instructions: Installing, updating, and uninstalling the AWS CLI

When you run aws configure, you are prompted for your Access Key ID and Secret Access Key (you can leave Region Name and Output Format blank).
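For scripted setups, the same values can be supplied non-interactively with the aws configure set command. The key values below are placeholders, not real credentials; substitute the values from the DataAccess console.

```shell
# Non-interactive alternative to the interactive `aws configure` prompts.
# The values shown are placeholders; use the keys from the DataAccess console.
aws configure set aws_access_key_id     AKIAEXAMPLEKEYID
aws configure set aws_secret_access_key exampleSecretAccessKey
```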

After you configure the CLI, you can make queries using the s3 command, which is used in the following examples.

List objects in S3

The ls command lists the objects in your S3 bucket. Use it to find the key of each object you want to download.

List all objects in the root folder:

aws s3 ls s3://dataaccess-<region><account>/<profile>/

List all event feeds:

aws s3 ls s3://dataaccess-<region><account>/<profile>/events/

List all objects in a specific events folder:

aws s3 ls s3://dataaccess-<region><account>/<profile>/events/<feed-id>/

List all audiences:

aws s3 ls s3://dataaccess-<region><account>/<profile>/audiences/

List all objects in a specific audiences folder:

aws s3 ls s3://dataaccess-<region><account>/<profile>/audiences/<action-id>/
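The ls output has four columns (date, time, size, key); in scripts you usually need only the key. The following sketch pulls out the key column with awk, using captured sample output so it runs without AWS credentials; the file names are made up, and in practice you would pipe the real aws s3 ls command instead.

```shell
# Sample `aws s3 ls` output (hypothetical file names). In practice, pipe the
# real command:  aws s3 ls s3://dataaccess-<region><account>/<profile>/events/<feed-id>/ | awk '{print $4}'
printf '%s\n' \
  '2015-06-14 01:05:12    48213 events_1434243912.gz' \
  '2015-06-14 02:05:09    51472 events_1434247509.gz' > ls-output.txt

# Keep only the object key (4th whitespace-separated column)
awk '{print $4}' ls-output.txt
```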

Get a single events object

The cp command downloads one specific remote key to a local location on your desktop or server.

aws s3 cp s3://dataaccess-<region><account>/<profile>/events/<feed-id>/<filename>.gz ./

Get a single audiences object

The cp command downloads one specific remote key to a local location on your desktop or server.

aws s3 cp s3://dataaccess-<region><account>/<profile>/audiences/<action-id>/<filename>.gz ./

Synchronize local and remote folders

The sync command takes a remote folder on Amazon S3 and synchronizes it with a local folder on your desktop or server. The following example synchronizes a specific remote EventStore or AudienceStore folder to a local folder on the desktop.

The --dryrun argument shows which files would sync, without performing the download. To execute the actual download, remove the --dryrun argument.

aws s3 sync s3://dataaccess-<region><account>/<profile>/audiences/<action-id>/ \
    ~/Desktop/temp --dryrun

Lastly, you can also filter the sync command to download only files matching a specific pattern. In this example, only the files that match the wildcard filter "*2015.06.14*" are downloaded.

aws s3 sync s3://dataaccess-<region><account>/<profile>/events/<feed-id>/ \
    ~/Desktop/temp --exclude "*" --include "*2015.06.14*" --dryrun
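The date filter can also be generated at run time, which is the basis of the "hourly incremental download" use case mentioned earlier. A minimal sketch, assuming file names contain a YYYY.MM.DD date as in the example above; the sync command is echoed here rather than executed, since a scheduled job would run it directly with real credentials.

```shell
# Build a wildcard filter for today's files, matching the YYYY.MM.DD
# pattern used in the file names (e.g. *2015.06.14*)
TODAY=$(date +%Y.%m.%d)

# Construct the sync command; a cron job would run it directly instead of echoing it.
echo aws s3 sync "s3://dataaccess-<region><account>/<profile>/events/<feed-id>/" \
  ~/Desktop/temp --exclude '*' --include "*${TODAY}*"
```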


This page was last updated: January 7, 2023