Lambda Functions
Deprecated: DuckDB 1.3.0 deprecated the old lambda single arrow syntax (x -> x + 1) in favor of the Python-style syntax (lambda x : x + 1).

DuckDB 1.3.0 also introduces a new setting to configure the lambda syntax:

SET lambda_syntax = 'DEFAULT';
SET lambda_syntax = 'ENABLE_SINGLE_ARROW';
SET lambda_syntax = 'DISABLE_SINGLE_ARROW';

Currently, DEFAULT enables both syntax styles, i.e., the old single arrow syntax and the Python-style syntax.

DuckDB 1.4.0 will be the last release supporting the single arrow syntax without explicitly enabling it.
DuckDB 1.5.0 disables the single arrow syntax by default.
DuckDB 1.6.0 removes the lambda_syntax flag and drops the single arrow syntax entirely, so the old behavior will no longer be possible.
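During the transition period, the old syntax can still be enabled explicitly. A minimal sketch, assuming a DuckDB version that still ships the lambda_syntax setting (1.3.x up to 1.5.x):

SET lambda_syntax = 'ENABLE_SINGLE_ARROW';
-- legacy single arrow syntax, accepted only while the setting allows it
SELECT list_transform([1, 2, 3], x -> x + 1);
-- equivalent Python-style syntax, which works regardless of the setting
SELECT list_transform([1, 2, 3], lambda x : x + 1);

Both queries return [2, 3, 4].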
Lambda functions enable the use of more complex and flexible expressions in queries.
DuckDB supports several scalar functions that operate on LISTs and accept lambda functions as parameters, in the form lambda parameter1, parameter2, ... : expression.
If the lambda function has only one parameter, then the parentheses can be omitted.
The parameters can have any names.
For example, the following are all valid lambda functions:
lambda param : param > 1
lambda s : contains(concat(s, 'DB'), 'duck')
lambda acc, x : acc + x
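For instance, passing the first of these lambdas to list_filter keeps only the elements greater than 1 (a minimal usage sketch with a made-up input list):

SELECT list_filter([0, 1, 2, 3], lambda param : param > 1);
[2, 3]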
Scalar Functions That Accept Lambda Functions

| Name | Description |
|---|---|
| list_transform(list, lambda(x)) | Returns a list that is the result of applying the lambda function to each element of the input list. The return type is defined by the return type of the lambda function. See list_transform examples. |
| list_filter(list, lambda(x)) | Constructs a list from those elements of the input list for which the lambda function returns true. DuckDB must be able to cast the lambda function's return type to BOOL. The return type of list_filter is the same as the input list's. See list_filter examples. |
| list_reduce(list, lambda(x, y)[, initial_value]) | Reduces all elements of the input list into a single scalar value by executing the lambda function on a running result and the next list element. The lambda function has an optional initial_value argument. See list_reduce examples for details. |
list_transform(list, lambda(x))

| Description | Returns a list that is the result of applying the lambda function to each element of the input list. The return type is defined by the return type of the lambda function. See list_transform examples. |
|---|---|
| Example | list_transform([4, 5, 6], lambda x : x + 1) |
| Result | [5, 6, 7] |
| Aliases | array_transform, apply, list_apply, array_apply |

list_filter(list, lambda(x))

| Description | Constructs a list from those elements of the input list for which the lambda function returns true. DuckDB must be able to cast the lambda function's return type to BOOL. The return type of list_filter is the same as the input list's. See list_filter examples. |
|---|---|
| Example | list_filter([4, 5, 6], lambda x : x > 4) |
| Result | [5, 6] |
| Aliases | array_filter, filter |

list_reduce(list, lambda(x, y)[, initial_value])

| Description | Reduces all elements of the input list into a single scalar value by executing the lambda function on a running result and the next list element. The lambda function has an optional initial_value argument. See list_reduce examples for details. |
|---|---|
| Example | list_reduce([1, 2, 3], lambda x, y : x + y, 100) |
| Result | 106 |
| Aliases | array_reduce, reduce |
Nesting Lambda Functions
All scalar functions can be arbitrarily nested. For example, nested lambda functions to get all squares of even list elements:
SELECT list_transform(
list_filter([0, 1, 2, 3, 4, 5], lambda x: x % 2 = 0),
lambda y: y * y
);
[0, 4, 16]
Nested lambda function to add each element of the first list to the sum of the second list:
SELECT list_transform(
[1, 2, 3],
lambda x :
list_reduce([4, 5, 6], lambda a, b: a + b) + x
);
[16, 17, 18]
Scoping
Lambda functions conform to scoping rules in the following order:
- inner lambda parameters
- outer lambda parameters
- column names
- macro parameters
CREATE TABLE tbl (x INTEGER);
INSERT INTO tbl VALUES (10);
SELECT list_apply(
[1, 2],
lambda x: list_apply([4], lambda x: x + tbl.x)[1] + x
)
FROM tbl;
[15, 16]
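The example above exercises lambda parameters and column names; macro parameters sit at the lowest precedence level. As a hedged sketch of that last level (the macro add_col_x is hypothetical, and this assumes the documented precedence also applies inside macro bodies), a column named x shadows a macro parameter named x when tbl from the example above is reused:

CREATE MACRO add_col_x(x) AS (
    SELECT list_transform([1, 2], lambda p: p + x) FROM tbl
);
SELECT add_col_x(100);
[11, 12]

Inside the lambda, x binds to the column tbl.x (value 10) rather than to the macro argument 100.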
Indexes as Parameters
All lambda functions accept an optional extra parameter that represents the index of the current element. This is always the last parameter of the lambda function (e.g., i in lambda x, i : ...), and it is 1-based (i.e., the first element has index 1).
Get all elements that are larger than their index:
SELECT list_filter([1, 3, 1, 5], lambda x, i: x > i);
[3, 5]
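The index parameter is available in the other lambda functions as well; list_reduce with an index appears in the examples below. A minimal sketch with list_transform, adding each element's 1-based index to the element itself:

SELECT list_transform([10, 20, 30], lambda x, i: x + i);
[11, 22, 33]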
Examples
list_transform Examples
Incrementing each list element by one:
SELECT list_transform([1, 2, NULL, 3], lambda x: x + 1);
[2, 3, NULL, 4]
Transforming strings:
SELECT list_transform(['Duck', 'Goose', 'Sparrow'], lambda s: concat(s, 'DB'));
[DuckDB, GooseDB, SparrowDB]
Combining lambda functions with other functions:
SELECT list_transform([5, NULL, 6], lambda x: coalesce(x, 0) + 1);
[6, 1, 7]
list_filter Examples
Filter out negative values:
SELECT list_filter([5, -6, NULL, 7], lambda x: x > 0);
[5, 7]
Divisible by 2 and 5:
SELECT list_filter(
list_filter([2, 4, 3, 1, 20, 10, 3, 30], lambda x: x % 2 = 0),
lambda y: y % 5 = 0
);
[20, 10, 30]
In combination with range(...) to construct lists:
SELECT list_filter([1, 2, 3, 4], lambda x: x > #1) FROM range(4);
[1, 2, 3, 4]
[2, 3, 4]
[3, 4]
[4]
list_reduce Examples
Sum of all list elements:
SELECT list_reduce([1, 2, 3, 4], lambda acc, x: acc + x);
10
Only add up list elements if they are greater than 2:
SELECT list_reduce(
list_filter([1, 2, 3, 4], lambda x: x > 2),
lambda acc, x: acc + x
);
7
Concat all list elements:
SELECT list_reduce(['DuckDB', 'is', 'awesome'], lambda acc, x: concat(acc, ' ', x));
DuckDB is awesome
Concatenate elements with the index without an initial value:
SELECT list_reduce(
['a', 'b', 'c', 'd'],
lambda x, y, i: x || ' - ' || CAST(i AS VARCHAR) || ' - ' || y
);
a - 2 - b - 3 - c - 4 - d
Concatenate elements with the index with an initial value:
SELECT list_reduce(
['a', 'b', 'c', 'd'],
lambda x, y, i: x || ' - ' || CAST(i AS VARCHAR) || ' - ' || y, 'INITIAL'
);
INITIAL - 1 - a - 2 - b - 3 - c - 4 - d
Limitations
Subqueries in lambda expressions are currently not supported. For example:
SELECT list_apply([1, 2, 3], lambda x: (SELECT 42) + x);
Binder Error:
subqueries in lambda expressions are not supported
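A possible workaround (a hedged sketch, not an official recommendation) is to evaluate the subquery outside the lambda and reference its result by name, relying on the scoping rules above, which make column names visible inside lambda bodies:

-- the scalar subquery runs in the derived table, not inside the lambda
SELECT list_apply([1, 2, 3], lambda x: v + x)
FROM (SELECT (SELECT 42) AS v);
[43, 44, 45]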