revert the added okwarning to user guide
jorisvandenbossche committed Oct 19, 2023
1 parent 81b2e47 commit 2ff11b3
Showing 5 changed files with 0 additions and 20 deletions.
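
For context: ``:okwarning:`` is an option of the IPython Sphinx directive used throughout the pandas docs; it tells the directive to tolerate warnings raised while executing the block instead of treating them as documentation-build errors. A minimal sketch of the pattern this commit removes (the surrounding code is illustrative, not part of the commit):

.. ipython:: python
   :okwarning:

   df.to_parquet("foo.parquet")  # a warning raised here would not fail the build

After the revert, the blocks run without the option, so any warning they raise surfaces again during the documentation build.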
2 changes: 0 additions & 2 deletions doc/source/user_guide/10min.rst
@@ -763,14 +763,12 @@ Parquet
 Writing to a Parquet file:

 .. ipython:: python
-   :okwarning:

    df.to_parquet("foo.parquet")

 Reading from a Parquet file Store using :func:`read_parquet`:

 .. ipython:: python
-   :okwarning:

    pd.read_parquet("foo.parquet")
11 changes: 0 additions & 11 deletions doc/source/user_guide/io.rst
@@ -2247,7 +2247,6 @@ For line-delimited json files, pandas can also return an iterator which reads in
 Line-limited json can also be read using the pyarrow reader by specifying ``engine="pyarrow"``.

 .. ipython:: python
-   :okwarning:

    from io import BytesIO
    df = pd.read_json(BytesIO(jsonl.encode()), lines=True, engine="pyarrow")
@@ -5372,15 +5371,13 @@ See the documentation for `pyarrow <https://arrow.apache.org/docs/python/>`__ an
 Write to a parquet file.

 .. ipython:: python
-   :okwarning:

    df.to_parquet("example_pa.parquet", engine="pyarrow")
    df.to_parquet("example_fp.parquet", engine="fastparquet")

 Read from a parquet file.

 .. ipython:: python
-   :okwarning:

    result = pd.read_parquet("example_fp.parquet", engine="fastparquet")
    result = pd.read_parquet("example_pa.parquet", engine="pyarrow")
@@ -5390,7 +5387,6 @@ Read from a parquet file.
 By setting the ``dtype_backend`` argument you can control the default dtypes used for the resulting DataFrame.

 .. ipython:: python
-   :okwarning:

    result = pd.read_parquet("example_pa.parquet", engine="pyarrow", dtype_backend="pyarrow")
@@ -5404,7 +5400,6 @@ By setting the ``dtype_backend`` argument you can control the default dtypes use
 Read only certain columns of a parquet file.

 .. ipython:: python
-   :okwarning:

    result = pd.read_parquet(
        "example_fp.parquet",
@@ -5433,7 +5428,6 @@ Serializing a ``DataFrame`` to parquet may include the implicit index as one or
 more columns in the output file. Thus, this code:

 .. ipython:: python
-   :okwarning:

    df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
    df.to_parquet("test.parquet", engine="pyarrow")
@@ -5450,7 +5444,6 @@ If you want to omit a dataframe's indexes when writing, pass ``index=False`` to
 :func:`~pandas.DataFrame.to_parquet`:

 .. ipython:: python
-   :okwarning:

    df.to_parquet("test.parquet", index=False)
@@ -5473,7 +5466,6 @@ Partitioning Parquet files
 Parquet supports partitioning of data based on the values of one or more columns.

 .. ipython:: python
-   :okwarning:

    df = pd.DataFrame({"a": [0, 0, 1, 1], "b": [0, 1, 0, 1]})
    df.to_parquet(path="test", engine="pyarrow", partition_cols=["a"], compression=None)
@@ -5539,14 +5531,12 @@ ORC format, :func:`~pandas.read_orc` and :func:`~pandas.DataFrame.to_orc`. This
 Write to an orc file.

 .. ipython:: python
-   :okwarning:

    df.to_orc("example_pa.orc", engine="pyarrow")

 Read from an orc file.

 .. ipython:: python
-   :okwarning:

    result = pd.read_orc("example_pa.orc")
@@ -5555,7 +5545,6 @@ Read from an orc file.
 Read only certain columns of an orc file.

 .. ipython:: python
-   :okwarning:

    result = pd.read_orc(
        "example_pa.orc",
3 changes: 0 additions & 3 deletions doc/source/user_guide/pyarrow.rst
@@ -104,7 +104,6 @@ To convert a :external+pyarrow:py:class:`pyarrow.Table` to a :class:`DataFrame`,
 :external+pyarrow:py:meth:`pyarrow.Table.to_pandas` method with ``types_mapper=pd.ArrowDtype``.

 .. ipython:: python
-   :okwarning:

    table = pa.table([pa.array([1, 2, 3], type=pa.int64())], names=["a"])
@@ -165,7 +164,6 @@ functions provide an ``engine`` keyword that can dispatch to PyArrow to accelera
 * :func:`read_feather`

 .. ipython:: python
-   :okwarning:

    import io
    data = io.StringIO("""a,b,c
@@ -180,7 +178,6 @@ PyArrow-backed data by specifying the parameter ``dtype_backend="pyarrow"``. A r
 ``engine="pyarrow"`` to necessarily return PyArrow-backed data.

 .. ipython:: python
-   :okwarning:

    import io
    data = io.StringIO("""a,b,c,d,e,f,g,h,i
3 changes: 0 additions & 3 deletions doc/source/user_guide/scale.rst
@@ -51,7 +51,6 @@ To load the columns we want, we have two options.
 Option 1 loads in all the data and then filters to what we need.

 .. ipython:: python
-   :okwarning:

    columns = ["id_0", "name_0", "x_0", "y_0"]
@@ -60,7 +59,6 @@ Option 1 loads in all the data and then filters to what we need.
 Option 2 only loads the columns we request.

 .. ipython:: python
-   :okwarning:

    pd.read_parquet("timeseries_wide.parquet", columns=columns)
@@ -202,7 +200,6 @@ counts up to this point. As long as each individual file fits in memory, this wi
 work for arbitrary-sized datasets.

 .. ipython:: python
-   :okwarning:

    %%time
    files = pathlib.Path("data/timeseries/").glob("ts*.parquet")
1 change: 0 additions & 1 deletion doc/source/whatsnew/v2.0.0.rst
@@ -152,7 +152,6 @@ When this keyword is set to ``"pyarrow"``, then these functions will return pyar
 * :meth:`Series.convert_dtypes`

 .. ipython:: python
-   :okwarning:

    import io
    data = io.StringIO("""a,b,c,d,e,f,g,h,i
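
The revert implies these code blocks no longer emit warnings when the docs are built. A quick local check of that assumption for the Parquet round trip (a sketch, not part of the commit; the file name is arbitrary, and a Parquet engine such as pyarrow must be installed):

import warnings

import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
with warnings.catch_warnings():
    warnings.simplefilter("error")  # escalate any warning to an exception
    df.to_parquet("foo.parquet")  # would raise if a warning were emitted
    result = pd.read_parquet("foo.parquet")
assert result.equals(df)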
