Releases
6.1.0 (2023-08-03)
Features
api: add ibis.dtype top-level API (867e5f1); a short example sketch of this and the other new expression APIs follows this list
api: add table.nunique() for counting unique table rows (adcd762)
api: allow mixing literals and columns in ibis.array (3355dd8)
api: improve efficiency of __dataframe__ protocol (15e27da)
api: support boolean literals in join API (c56376f)
arrays: add concat method equivalent to __add__/__radd__ (0ed0ab1)
arrays: add repeat method equivalent to __mul__/__rmul__ (b457c7b)
backends: add current_schema API (955a9d0)
bigquery: fill out CREATE TABLE DDL options including support for overwrite (5dac7ec)
datafusion: add count_distinct, median, approx_median, stddev and var aggregations (45089c4)
datafusion: add extract url fields functions (4f5ea98)
datafusion: add functions sign, power, nullifzero, log (ef72e40)
datafusion: add RegexSearch, StringContains and StringJoin (4edaab5)
datafusion: implement in-memory table (d4ec5c2)
flink: add tests and translation rules for additional operators (fc2aa5d)
flink: implement translation rules and tests for over aggregation in Flink backend (e173cd7)
flink: implement translation rules for literal expressions in flink compiler (a8f4880)
improved error messages when missing backend dependencies (2fe851b)
make output of to_sql a proper str subclass (084bdb9)
pandas: add ExtractURLField functions (e369333)
polars: implement ops.SelfReference (983e393)
pyspark: read/write delta tables (d403187)
refactor ddl for create_database and add create_schema where relevant (d7a857c)
sqlite: add scalar python udf support to sqlite (92f29e6)
sqlite: implement extract url field functions (cb1956f)
trino: implement support for .sql table expression method (479bc60)
trino: support table properties when creating a table (b9d65ef)
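A minimal sketch of a few of the new user-facing APIs above: ibis.dtype, Table.nunique, mixing literals and columns in ibis.array, the array concat/repeat methods, and the str-subclass output of to_sql. The table name, schema, and column names are hypothetical, and rendering SQL assumes the default backend (DuckDB) is installed.

```python
import ibis

# Hypothetical unbound table used only for illustration.
t = ibis.table({"user_id": "int64", "tags": "array<string>"}, name="events")

# New top-level dtype constructor: parse a type string into an ibis DataType.
dt = ibis.dtype("array<string>")

# Count the number of unique rows in the table.
n_unique_rows = t.nunique()

# Literals and columns can now be mixed when constructing an array.
arr = ibis.array([1, t.user_id])

# Array concat/repeat methods, equivalent to the + and * operators.
doubled = t.tags.concat(t.tags)  # same as t.tags + t.tags
tripled = t.tags.repeat(3)       # same as t.tags * 3

# to_sql now returns a proper str subclass, so its result can be passed
# anywhere a plain string is expected.
sql = ibis.to_sql(n_unique_rows)
assert isinstance(sql, str)
```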
Bug Fixes
api: allow scalar window order keys (3d3f4f3)
backends: make current_database implementation and API consistent across all backends (eeeeee0)
bigquery: respect the fully qualified table name at the init (a25f460)
clickhouse: check dispatching instead of membership in the registry for has_operation (acb7f3f)
datafusion: always quote column names to prevent datafusion from normalizing case (310db2b)
deps: update dependency datafusion to v27 (3a311cd)
druid: handle conversion issues from string, binary, and timestamp (b632063)
duckdb: avoid double escaping backslashes for bind parameters (8436f57)
duckdb: cast read_only to string for connection (27e17d6)
duckdb: deduplicate results from list_schemas() (172520e)
duckdb: ensure that current_database returns the correct value (2039b1e)
duckdb: handle conversion from duckdb_engine unsigned int aliases (e6fd0cc)
duckdb: map hugeint to decimal to avoid information loss (4fe91d4)
duckdb: run pre-execute-hooks in duckdb before file export (5bdaa1d)
duckdb: use regexp_matches to ensure that matching checks containment instead of a full match (0a0cda6); see the re_search sketch after this list
examples: remove example datasets that are incompatible with case-insensitive file systems (4048826)
exprs: ensure that left_semi and semi are equivalent (bbc1eb7)
forward arguments through __dataframe__ protocol (50f3be9)
ir: change "it not a" to "is not a" in errors (d0d463f)
memtable: implement support for translation of empty memtable (05b02da)
mysql: fix UUID type reflection for sqlalchemy 2.0.18 (12d4039)
mysql: pass-through kwargs to connect_args (e3f3e2d)
ops: ensure that name attribute is always valid for ops.SelfReference (9068aca)
polars: ensure that pivot_longer works with more than one column (822c912)
polars: fix collect implementation (c1182be)
postgres: by default use domain socket (e44fdfb)
pyspark: make has_operation method a @classmethod (c1b7dbc)
release: use @google/[email protected] to avoid module loading bug (673aab3)
snowflake: fix broken unnest functionality (207587c)
snowflake: reset the schema and database to the original schema after creating them (54ce26a)
snowflake: reset to original schema when resetting the database (32ff832)
snowflake: use regexp_instr != 0 instead of REGEXP keyword (06e2be4)
sqlalchemy: add support for sqlalchemy string subclassed types (8b33b35)
sql: handle parsing aliases (3645cf4)
trino: handle all remaining common datatype parsing (b3778c7)
trino: remove filter index warning in Trino dialect (a2ae7ae)
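Both regex fixes above (duckdb's regexp_matches and snowflake's regexp_instr) keep re_search behaving as a containment test rather than a full match. A minimal sketch, with a hypothetical table and pattern:

```python
import ibis

# Hypothetical table with a single string column.
t = ibis.table({"s": "string"}, name="strings")

# re_search is true when the pattern occurs anywhere in the string, not only
# when it matches the entire string; the fixes above align the duckdb and
# snowflake backends with that containment semantics.
contains_digits = t.filter(t.s.re_search(r"\d+"))
```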
Documentation
add conda/mamba install instructions for specific backends (c643fca)
add docstrings to DataType.is_* methods (ed40fdb); a small usage sketch follows this list
backend-matrix: add ability to select a specific subset of backends (f663066)
backends: document memtable support and performance for each backend (b321733)
blog: v6.0.0 release blog (21fc5da)
document versioning policy (242ea15)
dot-sql: add examples of mixing ibis expressions and SQL strings (5abd30e)
dplyr: small fixes to the dplyr getting started guide (4b57f7f)
expand docstring for dtype function (39b7a24)
fix function names in examples of extract url fields (872445e)
fix heading in 6.0.0 blog (0ad3ce2)
oracle: add note about old password checks in oracle (470b90b)
postgres: fix postgres memtable docs (7423eb9)
release-notes: fix typo (a319e3a)
social: add social media preview cards (e98a0a6)
update imports/exports for pyspark backend (16d73c4)
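For context on the DataType.is_* docstrings above, those methods are simple predicates on a parsed type; a small sketch (the type string is arbitrary):

```python
import ibis

dt = ibis.dtype("array<int64>")

# Each is_* method reports whether the type is of the named kind.
assert dt.is_array()
assert not dt.is_string()
assert dt.value_type.is_integer()
```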
Refactors
pyarrow: remove unnecessary calls to combine_chunks (c026d2d)
pyarrow: use schema.empty_table() instead of manually constructing empty tables (c099302); see the sketch after this list
result-handling: remove result_handler in favor of expression specific methods (3dc7143)
snowflake: enable multiple statements and clean up duplicated parameter setting code (75824a6)
tests: clean up backend test setup to make non-data-loading steps atomic (16b4632)
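An illustration of the pyarrow simplification above (not the actual Ibis code): Schema.empty_table() replaces building an empty table column by column.

```python
import pyarrow as pa

schema = pa.schema({"a": pa.int64(), "b": pa.string()})

# Manual construction of an empty table with a given schema ...
manual = pa.Table.from_arrays(
    [pa.array([], type=field.type) for field in schema], schema=schema
)

# ... versus the built-in helper the refactor switches to.
empty = schema.empty_table()

assert empty.schema == manual.schema and empty.num_rows == 0
```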