This is a centralized repository for downloading ready-to-use Redshift SQL notebooks. You can use these notebooks as-is to demo the features, following the narration in the markdown cells, or customize the base version to fit your customer demo requirements.
Click on the feature of interest to download the corresponding notebook. You can import that notebook into Redshift Query Editor V2 and follow the narrative. Please refer to the documentation for guidance on importing notebooks.
If you have any questions, please reach out to redshift-specialists-amer@amazon.com.
# | Feature | Tags | Description |
---|---|---|---|
1 | Amazon Q Generative SQL capability | generativeai, amazonq, sql | Generative SQL uses AI to analyze user intent, query patterns, and schema metadata to identify common SQL query patterns, allowing you to get insights faster in a conversational format without extensive knowledge of your organization's complex database metadata. |
2 | Multi-data warehouse writes through data sharing (Producer/Consumer) | datashare, scalability | At AWS re:Invent 2023, we extended data sharing capabilities to launch multi-data warehouse writes in preview. You can now start writing to Redshift databases from other Redshift data warehouses (sketch below the table). |
3 | Metadata security for multi-tenant applications | security, metadata | Amazon Redshift now supports metadata security, which enables administrators to restrict visibility of catalog data based on user roles and permissions. Users see only the metadata for the databases, schemas, and tables/views that they have access to, making it easier to deploy multi-tenant applications on a provisioned cluster or Serverless namespace. |
4 | AI-driven scaling and optimizations | serverless, scalability, performance | Amazon Redshift Serverless scales proactively and automatically with workload changes across all key dimensions, such as data volume, concurrent users, and query complexity. You simply specify your desired price-performance target (optimize for cost, optimize for performance, or balanced) and Serverless does the rest. |
5 | Redshift ML large language model (LLM) integration | generativeai, redshiftml | At AWS re:Invent, we announced support for LLMs in preview. You can now use pre-trained open-source LLMs in Amazon SageMaker JumpStart as part of Redshift ML, bringing the power of LLMs to analytics (sketch below the table). |
6 | Data lake querying with support for Apache Iceberg tables | datalake, iceberg, spectrum | At AWS re:Invent, we announced the general availability of support for Apache Iceberg tables, so you can easily access Apache Iceberg tables in your data lake from Amazon Redshift and join them with the data in your data warehouse when needed (sketch below the table). |
7 | Incremental refresh for materialized views on data lake tables | datalake, materializedview, spectrum | Data lake queries can benefit from incremental refreshes, which avoid unnecessary data scans on the data lake during a refresh and reduce the time and cost of refreshing materialized views for eligible queries (sketch below the table). |
8 | Auto-copy from Amazon S3 | zeroetl, datalake, ingestion | You can now store a COPY statement as a copy job, which automatically loads new files detected in the specified Amazon S3 path. Copy jobs track previously loaded files and exclude them from the ingestion process, and their activity can be monitored using system tables (sketch below the table). |
9 | SUPER data type support up to 16 MB | super, semistructured | Amazon Redshift now supports storing large objects, up to 16 MB in size, in the SUPER data type. When ingesting from JSON, PARQUET, TEXT, and CSV source files, you can load semi-structured data or documents as SUPER values up to 16 MB; before this enhancement, the limit was 1 MB (sketch below the table). |
10 | Dynamic data masking support on SUPER data type | security, super | You can apply dynamic data masking rules to SUPER data type columns. This gives you the flexibility to apply masking policies without breaking SUPER values into individual fields through complex ETL (sketch below the table). |
11 | OLAP constructs - ROLLUP, CUBE, GROUPING SETS | sql, olap, performance | New SQL constructs (ROLLUP, CUBE, and GROUPING SETS) simplify building multi-dimensional analytics applications (sketch below the table). |
12 | Redshift Serverless most frequent monitoring queries | serverless, monitoring, performance | You can use the queries in this notebook to answer questions such as: 1/ How do I monitor queries by status? 2/ How do I break down a specific query's elapsed time? 3/ How do I monitor Redshift Serverless usage cost by day? 4/ How do I monitor data loads (COPY commands)? Useful to share for Serverless observability (sketch below the table). |
13 | Performance analysis on a provisioned cluster | provisioned, monitoring, performance | The most frequently used performance monitoring queries, such as 1/ identifying long-running queries, 2/ studying query volume, and 3/ reviewing Concurrency Scaling usage (sketch below the table). |
14 | Redshift Auto ML | redshiftml | You can use this notebook to demonstrate core Redshift ML capabilities to create, train, deploy, and run inference on models using simple SQL commands (sketch below the table). |
15 | Redshift ML integration with Amazon Forecast | redshiftml | With Amazon Redshift ML, you can now leverage Amazon Forecast, an ML-based time-series forecasting service, without having to learn any new tools or create pipelines to move your data. |
16 | Redshift Spectrum | spectrum, datalake | You can use this notebook to demo how to query data in your Amazon S3 data lake and your data warehouse with a single query (data lake sketch below the table). |
17 | Redshift streaming ingestion | ingestion, realtime, zeroetl | Generate near real-time insights through streaming data ingestion into your data warehouse and data visualizations. You can use this notebook to demonstrate ingesting streaming data directly into your data warehouse from Kinesis Data Streams without the need to stage it in Amazon S3 (sketch below the table). |
18 | RLS on views, late-binding views, and CONJUNCTION TYPE | security, rls | Covers how to implement row-level security (RLS) on views and late-binding views, and how to combine multiple policies per user using CONJUNCTION TYPE for RLS policies (sketch below the table). |
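
The sketches below illustrate the kind of SQL each notebook walks through. They are minimal examples with hypothetical table, schema, role, and resource names; download the notebooks for the complete, tested versions.

**Multi-data warehouse writes through data sharing (row 2).** A minimal producer/consumer sketch, assuming a hypothetical `sales_share` datashare and `sales.orders` table; the namespace GUIDs are placeholders, and the exact write-permission grants depend on your setup.

```sql
-- On the producer warehouse: create and populate the datashare, then grant it to the consumer namespace.
CREATE DATASHARE sales_share;
ALTER DATASHARE sales_share ADD SCHEMA sales;
ALTER DATASHARE sales_share ADD TABLE sales.orders;
GRANT USAGE ON DATASHARE sales_share TO NAMESPACE '<consumer-namespace-guid>';  -- placeholder GUID

-- On the consumer warehouse: create a local database from the datashare, then write through it.
CREATE DATABASE sales_db FROM DATASHARE sales_share OF NAMESPACE '<producer-namespace-guid>';
INSERT INTO sales_db.sales.orders VALUES (1001, '2024-05-01', 49.99);  -- assumes (order_id, order_date, order_total)
```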
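
**Redshift ML LLM integration (row 5).** A sketch of exposing a remote SageMaker endpoint as a SQL function via Redshift ML's bring-your-own-model pattern; the endpoint name, function name, input/output types, and `product_reviews` table are hypothetical, and the LLM-specific options shown in the notebook may differ.

```sql
-- Register a hypothetical SageMaker JumpStart LLM endpoint as a SQL function (remote inference).
CREATE MODEL llm_summarizer
FUNCTION summarize_text(VARCHAR)
RETURNS VARCHAR
SAGEMAKER 'jumpstart-llm-endpoint'   -- placeholder endpoint name
IAM_ROLE DEFAULT;

-- Invoke the generated function like any other SQL function.
SELECT review_id, summarize_text(review_text) AS summary
FROM product_reviews
LIMIT 10;
```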
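
**Data lake querying with Redshift Spectrum and Apache Iceberg (rows 6 and 16).** A sketch, assuming a hypothetical AWS Glue database `my_glue_database` containing a lake table `orders_lake` (Iceberg, Parquet, or similar) that is joined with a local `public.customers` table.

```sql
-- External schema over the AWS Glue Data Catalog; Iceberg and other lake tables become queryable in place.
CREATE EXTERNAL SCHEMA lake_schema
FROM DATA CATALOG
DATABASE 'my_glue_database'
IAM_ROLE DEFAULT;

-- Single query joining a data lake table with a warehouse table.
SELECT c.customer_segment, SUM(o.order_total) AS total_spend
FROM lake_schema.orders_lake o              -- table stored in Amazon S3
JOIN public.customers c                     -- table stored in the warehouse
  ON c.customer_id = o.customer_id
GROUP BY c.customer_segment;
```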
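
**Incremental refresh for materialized views on data lake tables (row 7).** A sketch that builds a materialized view over the hypothetical `lake_schema.orders_lake` table from the previous sketch; for eligible queries, the refresh is incremental rather than a full rescan of the lake.

```sql
-- Materialized view over a data lake table.
CREATE MATERIALIZED VIEW mv_daily_lake_sales AS
SELECT order_date, SUM(order_total) AS daily_total
FROM lake_schema.orders_lake
GROUP BY order_date;

-- For eligible queries, only new or changed data is processed during the refresh.
REFRESH MATERIALIZED VIEW mv_daily_lake_sales;
```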
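
**Auto-copy from Amazon S3 (row 8).** A sketch, assuming a hypothetical `public.web_events` table and S3 prefix; the copy job keeps loading new files as they arrive.

```sql
-- Turn a one-time COPY into a copy job that automatically ingests newly arriving files.
COPY public.web_events
FROM 's3://my-ingest-bucket/web-events/'     -- placeholder bucket and prefix
IAM_ROLE DEFAULT
FORMAT AS PARQUET
JOB CREATE web_events_ingest_job AUTO ON;

-- Copy job activity can then be monitored through the system tables referenced in the notebook.
```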
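
**SUPER data type support up to 16 MB (row 9).** A sketch, assuming hypothetical JSON documents in S3 that are loaded whole into a single SUPER column; documents up to 16 MB can now be stored.

```sql
-- Table with a SUPER column to hold whole semi-structured documents.
CREATE TABLE customer_documents (doc SUPER);

-- Load each JSON document into the SUPER column without shredding it into separate columns.
COPY customer_documents
FROM 's3://my-ingest-bucket/customer-docs/'   -- placeholder bucket and prefix
IAM_ROLE DEFAULT
FORMAT JSON 'noshred';
```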
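
**Dynamic data masking on SUPER (row 10).** A sketch, assuming a hypothetical `customers.profile` SUPER column; this simple policy replaces the whole document with a redacted stub, while the notebook covers finer-grained masking.

```sql
-- Masking policy that returns a redacted SUPER document instead of the real one.
CREATE MASKING POLICY mask_customer_profile
WITH (profile SUPER)
USING (JSON_PARSE('{"ssn":"XXX-XX-XXXX"}'));

-- Attach the policy to the SUPER column for all users.
ATTACH MASKING POLICY mask_customer_profile
ON customers(profile)
TO PUBLIC;
```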
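
**ROLLUP, CUBE, and GROUPING SETS (row 11).** A sketch over a hypothetical `sales` table showing subtotals and a grand total computed in a single pass.

```sql
-- ROLLUP produces per-(region, product) totals, per-region subtotals, and a grand total.
SELECT region, product, SUM(amount) AS total_sales
FROM sales
GROUP BY ROLLUP (region, product)
ORDER BY region, product;

-- GROUPING SETS lets you pick the exact aggregation levels; CUBE produces all combinations.
SELECT region, product, SUM(amount) AS total_sales
FROM sales
GROUP BY GROUPING SETS ((region), (product), ());
```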
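
**Redshift Serverless monitoring queries (row 12).** A sketch using the SYS_QUERY_HISTORY system view; the time window and limits are placeholders.

```sql
-- Recent queries by status, with elapsed time (stored in microseconds) for a quick health check.
SELECT query_id, status, start_time,
       elapsed_time / 1000000.0 AS elapsed_seconds,
       LEFT(query_text, 80)     AS query_snippet
FROM sys_query_history
WHERE start_time > DATEADD(hour, -24, GETDATE())
ORDER BY elapsed_time DESC
LIMIT 20;
```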
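
**Performance analysis on a provisioned cluster (row 13).** A sketch using STL_QUERY to surface long-running queries; the thresholds are placeholders.

```sql
-- Queries that ran longer than 5 minutes in the last day.
SELECT query, userid, starttime,
       DATEDIFF(second, starttime, endtime) AS duration_seconds,
       TRIM(querytxt) AS query_snippet
FROM stl_query
WHERE starttime > DATEADD(day, -1, GETDATE())
  AND DATEDIFF(second, starttime, endtime) > 300
ORDER BY duration_seconds DESC;
```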
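
**Redshift Auto ML (row 14).** A sketch, assuming a hypothetical `customer_activity` table and S3 bucket; CREATE MODEL trains and deploys a model, and the generated function is then used for inference in SQL.

```sql
-- Train a churn-prediction model; Redshift ML selects the algorithm automatically.
CREATE MODEL customer_churn_model
FROM (SELECT age, tenure_months, monthly_spend, churned
      FROM customer_activity)
TARGET churned
FUNCTION predict_customer_churn
IAM_ROLE DEFAULT
SETTINGS (S3_BUCKET 'my-redshift-ml-bucket');   -- placeholder bucket

-- Run inference with the generated SQL function.
SELECT customer_id,
       predict_customer_churn(age, tenure_months, monthly_spend) AS churn_prediction
FROM customer_activity;
```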
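
**Streaming ingestion from Kinesis Data Streams (row 17).** A sketch, assuming a hypothetical stream named `my-click-stream`; the materialized view lands stream records directly in the warehouse without staging in Amazon S3.

```sql
-- External schema mapped to Kinesis Data Streams.
CREATE EXTERNAL SCHEMA kinesis_schema
FROM KINESIS
IAM_ROLE DEFAULT;

-- Auto-refreshing materialized view over the stream; kinesis_data holds the record payload.
CREATE MATERIALIZED VIEW mv_clickstream AUTO REFRESH YES AS
SELECT approximate_arrival_timestamp,
       JSON_PARSE(kinesis_data) AS payload
FROM kinesis_schema."my-click-stream";
```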
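
**Row-level security with CONJUNCTION TYPE (row 18).** A sketch, assuming a hypothetical `sales` table and `analyst_east` role; the notebook extends this to views, late-binding views, and multiple policies per user.

```sql
-- Policy that exposes only rows for one region (placeholder predicate).
CREATE RLS POLICY policy_east_region
WITH (region VARCHAR(20))
USING (region = 'EAST');

-- Attach the policy to a role and turn RLS on; CONJUNCTION TYPE controls how multiple policies combine.
ATTACH RLS POLICY policy_east_region ON sales TO ROLE analyst_east;
ALTER TABLE sales ROW LEVEL SECURITY ON CONJUNCTION TYPE OR;
```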