<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Apache Flink Documentation on Apache Flink</title>
    <link>//nightlies.apache.org/flink/flink-docs-release-2.2/</link>
    <description>Recent content in Apache Flink Documentation on Apache Flink</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <atom:link href="//nightlies.apache.org/flink/flink-docs-release-2.2/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Event Processing (CEP)</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/libs/cep/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/libs/cep/</guid>
      <description>FlinkCEP - Complex event processing for Flink # FlinkCEP is the Complex Event Processing (CEP) library implemented on top of Flink. It allows you to detect event patterns in an endless stream of events, giving you the opportunity to get hold of what&amp;rsquo;s important in your data.&#xA;This page describes the API calls available in Flink CEP. We start by presenting the Pattern API, which allows you to specify the patterns that you want to detect in your stream, before presenting how you can detect and act upon matching event sequences.</description>
    </item>
    <item>
      <title>Execution Configuration</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/execution/execution_configuration/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/execution/execution_configuration/</guid>
      <description>Execution Configuration # The StreamExecutionEnvironment contains the ExecutionConfig, which allows setting job-specific configuration values for the runtime. To change the defaults that affect all jobs, see Configuration.&#xA;Java:&#xA;StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();&#xA;ExecutionConfig executionConfig = env.getConfig();&#xA;Python:&#xA;env = StreamExecutionEnvironment.get_execution_environment()&#xA;execution_config = env.get_config()&#xA;The following configuration options are available (the default is bold):&#xA;setClosureCleanerLevel(). The closure cleaner level is set to ClosureCleanerLevel.RECURSIVE by default. The closure cleaner removes unneeded references to the surrounding class of anonymous functions inside Flink programs.</description>
    </item>
    <item>
      <title>First steps</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/try-flink/local_installation/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/try-flink/local_installation/</guid>
      <description>First steps # Welcome to Flink! :)&#xA;Flink is designed to process continuous streams of data at a lightning fast pace. This short guide will show you how to download, install, and run the latest stable version of Flink. You will also run an example Flink job and view it in the web UI.&#xA;Downloading Flink # Note: Flink is also available as a Docker image. Flink runs on all UNIX-like environments, i.</description>
    </item>
    <item>
      <title>Formats</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/overview/</guid>
      <description> Formats # Flink provides a set of table formats that can be used with table connectors. A table format is a storage format that defines how to map binary data onto table columns.&#xA;Flink supports the following formats (format: supported connectors):&#xA;CSV: Apache Kafka, Upsert Kafka, Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose, Filesystem&#xA;JSON: Apache Kafka, Upsert Kafka, Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose, Filesystem, Elasticsearch&#xA;Apache Avro: Apache Kafka, Upsert Kafka, Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose, Filesystem&#xA;Confluent Avro: Apache Kafka, Upsert Kafka&#xA;Protobuf: Apache Kafka&#xA;Debezium CDC: Apache Kafka, Filesystem&#xA;Canal CDC: Apache Kafka, Filesystem&#xA;Maxwell CDC: Apache Kafka, Filesystem&#xA;OGG CDC: Apache Kafka, Filesystem&#xA;Apache Parquet: Filesystem&#xA;Apache ORC: Filesystem&#xA;Raw: Apache Kafka, Upsert Kafka, Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose, Filesystem</description>
    </item>
    <item>
      <title>Intro to the Python DataStream API</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/datastream/intro_to_datastream_api/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/datastream/intro_to_datastream_api/</guid>
      <description>Intro to the Python DataStream API # DataStream programs in Flink are regular programs that implement transformations on data streams (e.g., filtering, updating state, defining windows, aggregating). The data streams are initially created from various sources (e.g., message queues, socket streams, files). Results are returned via sinks, which may for example write the data to files, or to standard output (for example the command line terminal).&#xA;The Python DataStream API is a Python version of the DataStream API that allows Python users to write Python DataStream API jobs.</description>
    </item>
    <item>
      <title>OpenAI</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/models/openai/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/models/openai/</guid>
      <description>OpenAI # The OpenAI Model Function allows Flink SQL to call the OpenAI API for inference tasks.&#xA;Overview # The function supports calling remote OpenAI model services via Flink SQL for prediction/inference tasks. Currently, the following tasks are supported:&#xA;Chat Completions: generate a model response from a list of messages comprising a conversation. Embeddings: get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/concepts/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/concepts/overview/</guid>
      <description>Concepts # The Hands-on Training explains the basic concepts of stateful and timely stream processing that underlie Flink&amp;rsquo;s APIs, and provides examples of how these mechanisms are used in applications. Stateful stream processing is introduced in the context of Data Pipelines &amp;amp; ETL and is further developed in the section on Fault Tolerance. Timely stream processing is introduced in the section on Streaming Analytics.&#xA;This Concepts in Depth section provides a deeper understanding of how Flink&amp;rsquo;s architecture and runtime implement these concepts.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/formats/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/formats/overview/</guid>
      <description>DataStream Formats # Available Formats # Formats define how information is encoded for storage. Currently these formats are supported:&#xA;Avro Azure Table Hadoop Parquet Text files</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/overview/</guid>
      <description>DataStream Connectors # Predefined Sources and Sinks # A few basic data sources and sinks are built into Flink and are always available. The predefined data sources include reading from files, directories, and sockets, and ingesting data from collections and iterators. The predefined data sinks support writing to files, to stdout and stderr, and to sockets.&#xA;Flink Project Connectors # Connectors provide code for interfacing with various third-party systems. Currently these systems are supported as part of the Apache Flink project:</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/overview/</guid>
      <description>Table &amp;amp; SQL Connectors # Flink&amp;rsquo;s Table API &amp;amp; SQL programs can be connected to other external systems for reading and writing both batch and streaming tables. A table source provides access to data which is stored in external systems (such as a database, key-value store, message queue, or file system). A table sink emits a table to an external storage system. Depending on the type of source and sink, they support different formats such as CSV, Avro, Parquet, or ORC.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/filesystems/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/filesystems/overview/</guid>
      <description>File Systems # Apache Flink uses file systems to consume and persistently store data, both for the results of applications and for fault tolerance and recovery. Flink supports some of the most popular file systems, including local, Hadoop-compatible, Amazon S3, Aliyun OSS and Azure Blob Storage.&#xA;The file system used for a particular file is determined by its URI scheme. For example, file:///home/user/text.txt refers to a file in the local file system, while hdfs://namenode:50010/data/user/text.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/ha/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/ha/overview/</guid>
      <description>High Availability # JobManager High Availability (HA) hardens a Flink cluster against JobManager failures. This feature ensures that a Flink cluster will always continue executing your submitted jobs.&#xA;JobManager High Availability # The JobManager coordinates every Flink deployment. It is responsible for both scheduling and resource management.&#xA;By default, there is a single JobManager instance per Flink cluster. This creates a single point of failure (SPOF): if the JobManager crashes, no new programs can be submitted and running programs fail.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/overview/</guid>
      <description>Deployment # Flink is a versatile framework, supporting many different deployment scenarios in a mix and match fashion.&#xA;Below, we briefly explain the building blocks of a Flink cluster, their purpose and available implementations. If you just want to start Flink locally, we recommend setting up a Standalone Cluster.&#xA;Overview and Reference Architecture # The figure below shows the building blocks of every Flink cluster. There is always a client running somewhere.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/configuration/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/configuration/overview/</guid>
      <description>Project Configuration # The guides in this section will show you how to configure your projects via popular build tools (Maven, Gradle), add the necessary dependencies (e.g. connectors and formats, testing), and cover some advanced configuration topics.&#xA;Every Flink application depends on a set of Flink libraries. At a minimum, the application depends on the Flink APIs and, in addition, on certain connector libraries (e.g. Kafka, Cassandra) and 3rd party dependencies required by the user to develop custom functions to process the data.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/serialization/types_serialization/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/serialization/types_serialization/</guid>
      <description>Data Types &amp;amp; Serialization # Apache Flink handles data types and serialization in a unique way, containing its own type descriptors, generic type extraction, and type serialization framework. This document describes the concepts and the rationale behind them.&#xA;Supported Data Types # Flink places some restrictions on the type of elements that can be in a DataStream. The reason for this is that the system analyzes the types to determine efficient execution strategies.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/operators/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/operators/overview/</guid>
      <description>Operators # Operators transform one or more DataStreams into a new DataStream. Programs can combine multiple transformations into sophisticated dataflow topologies.&#xA;This section gives a description of the basic transformations, the effective physical partitioning after applying those as well as insights into Flink&amp;rsquo;s operator chaining.&#xA;DataStream Transformations # Map # DataStream → DataStream # Takes one element and produces one element. A map function that doubles the values of the input stream:</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/overview/</guid>
      <description>Flink DataStream API Programming Guide # DataStream programs in Flink are regular programs that implement transformations on data streams (e.g., filtering, updating state, defining windows, aggregating). The data streams are initially created from various sources (e.g., message queues, socket streams, files). Results are returned via sinks, which may for example write the data to files, or to standard output (for example the command line terminal). Flink programs run in a variety of contexts, standalone, or embedded in other programs.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/datastream/operators/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/datastream/operators/overview/</guid>
      <description>Operators # Operators transform one or more DataStreams into a new DataStream. Programs can combine multiple transformations into sophisticated dataflow topologies.&#xA;DataStream Transformations # DataStream programs in Flink are regular programs that implement transformations on data streams (e.g., mapping, filtering, reducing). Please see operators for an overview of the available transformations in Python DataStream API.&#xA;Functions # Transformations accept user-defined functions as input to define the functionality of the transformations.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/overview/</guid>
      <description>Python API # PyFlink is a Python API for Apache Flink that allows you to build scalable batch and streaming workloads, such as real-time data processing pipelines, large-scale exploratory data analysis, Machine Learning (ML) pipelines and ETL processes. If you&amp;rsquo;re already familiar with Python and libraries such as Pandas, then PyFlink makes it simpler to leverage the full capabilities of the Flink ecosystem. Depending on the level of abstraction you need, there are two different APIs that can be used in PyFlink:</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/udfs/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/udfs/overview/</guid>
      <description>User-defined Functions # PyFlink Table API empowers users to do data transformations with Python user-defined functions.&#xA;Currently, it supports two kinds of Python user-defined functions: general Python user-defined functions, which process data one row at a time, and vectorized Python user-defined functions, which process data one batch at a time.&#xA;Bundling UDFs # To run Python UDFs (as well as Pandas UDFs) in any non-local mode, it is strongly recommended to bundle your Python UDF definitions using the config option python-files if your Python UDFs live outside the file where the main() function is defined.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/concepts/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/concepts/overview/</guid>
      <description>Streaming Concepts # Flink&amp;rsquo;s Table API and SQL support are unified APIs for batch and stream processing. This means that Table API and SQL queries have the same semantics regardless of whether their input is bounded batch input or unbounded stream input.&#xA;The following pages explain concepts, practical limitations, and stream-specific configuration parameters of Flink&amp;rsquo;s relational APIs on streaming data.&#xA;State Management # Table programs that run in streaming mode leverage all capabilities of Flink as a stateful stream processor.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/functions/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/functions/overview/</guid>
      <description>Functions # Flink Table API &amp;amp; SQL empowers users to do data transformations with functions.&#xA;Types of Functions # There are two dimensions to classify functions in Flink.&#xA;One dimension is system (or built-in) functions vs. catalog functions. System functions have no namespace and can be referenced with just their names. Catalog functions belong to a catalog and database, and therefore have catalog and database namespaces; they can be referenced by either a fully or partially qualified name (catalog.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/overview/</guid>
      <description>Hive Dialect # Flink allows users to write SQL statements in Hive syntax when Hive dialect is used. By providing compatibility with Hive syntax, we aim to improve the interoperability with Hive and reduce the scenarios when users need to switch between Flink and Hive in order to execute different statements.&#xA;Use Hive Dialect # Flink currently supports two SQL dialects: default and hive. You need to switch to Hive dialect before you can write in Hive syntax.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/overview/</guid>
      <description>Queries # Description # Hive dialect supports a commonly-used subset of Hive’s DQL. The following lists some parts of HiveQL supported by the Hive dialect.&#xA;Sort/Cluster/Distributed BY&#xA;Group By&#xA;Join&#xA;Set Operation&#xA;Lateral View&#xA;Window Functions&#xA;Sub-Queries&#xA;CTE&#xA;Transform&#xA;Table Sample&#xA;Syntax # The following section describes the overall query syntax. The SELECT clause can be part of a query which also includes common table expressions (CTE), set operations, and various other clauses.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/materialized-table/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/materialized-table/overview/</guid>
      <description>Introduction # Materialized Table is a new table type introduced in Flink SQL, aimed at simplifying both batch and stream data pipelines and providing a consistent development experience. By specifying the data freshness and query when creating a Materialized Table, the engine automatically derives the schema for the materialized table and creates a corresponding data refresh pipeline to achieve the specified freshness.&#xA;Core Concepts # Materialized Tables encompass the following core concepts: Data Freshness, Refresh Mode, Query Definition and Schema.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/overview/</guid>
      <description>Table API &amp;amp; SQL # Apache Flink features two relational APIs - the Table API and SQL - for unified stream and batch processing. The Table API is a language-integrated query API for Java, Scala, and Python that allows the composition of queries from relational operators such as selection, filter, and join in a very intuitive way. Flink&amp;rsquo;s SQL support is based on Apache Calcite which implements the SQL standard.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql-gateway/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql-gateway/overview/</guid>
      <description>Introduction # The SQL Gateway is a service that enables multiple remote clients to execute SQL concurrently. It provides an easy way to submit Flink jobs, look up metadata, and analyze data online.&#xA;The SQL Gateway is composed of pluggable endpoints and the SqlGatewayService. The SqlGatewayService is a processor that is reused by the endpoints to handle requests. The endpoint is an entry point that allows users to connect.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/overview/</guid>
      <description>Queries # SELECT statements and VALUES statements are specified with the sqlQuery() method of the TableEnvironment. The method returns the result of the SELECT statement (or the VALUES statements) as a Table. A Table can be used in subsequent SQL and Table API queries, be converted into a DataStream, or written to a TableSink. SQL and Table API queries can be seamlessly mixed and are holistically optimized and translated into a single program.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/learn-flink/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/learn-flink/overview/</guid>
      <description>Learn Flink: Hands-On Training # Goals and Scope of this Training # This training presents an introduction to Apache Flink that includes just enough to get you started writing scalable streaming ETL, analytics, and event-driven applications, while leaving out a lot of (ultimately important) details. The focus is on providing straightforward introductions to Flink’s APIs for managing state and time, with the expectation that having mastered these fundamentals, you’ll be much better equipped to pick up the rest of what you need to know from the more detailed reference documentation.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/hive/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/hive/overview/</guid>
      <description>Apache Hive # Apache Hive has established itself as a focal point of the data warehousing ecosystem. It serves as not only a SQL engine for big data analytics and ETL, but also a data management platform, where data is discovered, defined, and evolved.&#xA;Flink offers a two-fold integration with Hive.&#xA;The first is to leverage Hive&amp;rsquo;s Metastore as a persistent catalog with Flink&amp;rsquo;s HiveCatalog for storing Flink specific metadata across sessions.</description>
    </item>
    <item>
      <title>Set up Flink&#39;s Process Memory</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/memory/mem_setup/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/memory/mem_setup/</guid>
      <description>Set up Flink&amp;rsquo;s Process Memory # Apache Flink provides efficient workloads on top of the JVM by tightly controlling the memory usage of its various components. While the community strives to offer sensible defaults to all configurations, the full breadth of applications that users deploy on Flink means this isn&amp;rsquo;t always possible. To provide the most production value to our users, Flink allows both high-level and fine-grained tuning of memory allocation within clusters.</description>
    </item>
    <item>
      <title>SQL</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/overview/</guid>
      <description>SQL # This page describes the SQL language supported in Flink, including Data Definition Language (DDL), Data Manipulation Language (DML) and Query Language. Flink’s SQL support is based on Apache Calcite, which implements the SQL standard.&#xA;This page lists all the statements supported in Flink SQL for now:&#xA;SELECT (Queries)&#xA;CREATE TABLE, CATALOG, DATABASE, VIEW, FUNCTION&#xA;DROP TABLE, DATABASE, VIEW, FUNCTION&#xA;ALTER TABLE, DATABASE, FUNCTION&#xA;ANALYZE TABLE&#xA;INSERT&#xA;UPDATE&#xA;DELETE&#xA;DESCRIBE&#xA;EXPLAIN&#xA;USE&#xA;SHOW&#xA;LOAD&#xA;UNLOAD&#xA;Data Types # Please see the dedicated page about data types.</description>
    </item>
    <item>
      <title>State TTL Migration Compatibility</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/state_migration/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/state_migration/</guid>
      <description>State TTL Migration Compatibility # Starting with Apache Flink 2.2.0, the system supports seamless enabling or disabling of State Time-to-Live (TTL) for existing state. This enhancement removes prior limitations where a change in TTL configuration could cause a StateMigrationException during restore.&#xA;Version Overview # (Flink version: change)&#xA;2.0.0: Introduced TtlAwareSerializer to support TTL/non-TTL serializer compatibility&#xA;2.1.0: Added TTL migration support for RocksDBKeyedStateBackend&#xA;2.2.0: Added TTL migration support for HeapKeyedStateBackend&#xA;Full TTL state migration support across all major state backends is available from Flink 2.</description>
    </item>
    <item>
      <title>Working with State</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/state/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/state/</guid>
      <description>Working with State # In this section you will learn about the APIs that Flink provides for writing stateful programs. Please take a look at Stateful Stream Processing to learn about the concepts behind stateful stream processing.&#xA;Keyed DataStream # If you want to use keyed state, you first need to specify a key on a DataStream that should be used to partition the state (and also the records in the stream themselves).</description>
    </item>
    <item>
      <title>Common Configurations</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/filesystems/common/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/filesystems/common/</guid>
      <description>Common Configurations # Apache Flink provides several standard configuration settings that work across all file system implementations.&#xA;Default File System # A default scheme (and authority) is used if paths to files do not explicitly specify a file system scheme (and authority).&#xA;fs.default-scheme: &amp;lt;default-fs&amp;gt; For example, if the default file system is configured as fs.default-scheme: hdfs://localhost:9000/, then a file path of /user/hugo/in.txt is interpreted as hdfs://localhost:9000/user/hugo/in.txt.&#xA;Connection limiting # You can limit the total number of connections that a file system can concurrently open, which is useful when the file system cannot handle a large number of concurrent reads/writes or open connections at the same time.</description>
    </item>
    <item>
      <title>Concepts &amp; Common API</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/common/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/common/</guid>
      <description>Concepts &amp;amp; Common API # The Table API and SQL are integrated in a joint API. The central concept of this API is a Table which serves as input and output of queries. This document shows the common structure of programs with Table API and SQL queries, how to register a Table, how to query a Table, and how to emit a Table.&#xA;Structure of Table API and SQL Programs # The following code example shows the common structure of Table API and SQL programs.</description>
    </item>
    <item>
      <title>CREATE Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/create/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/create/</guid>
      <description>CREATE Statements # With Hive dialect, the following CREATE statements are supported for now:&#xA;CREATE DATABASE CREATE TABLE CREATE VIEW CREATE MACRO CREATE FUNCTION CREATE DATABASE # Description # The CREATE DATABASE statement is used to create a database with the specified name.&#xA;Syntax # CREATE (DATABASE|SCHEMA) [IF NOT EXISTS] database_name [COMMENT database_comment] [LOCATION hdfs_path] [WITH DBPROPERTIES (property_name=property_value, ...)]; Examples # CREATE DATABASE db1; CREATE DATABASE IF NOT EXISTS db1 COMMENT &amp;#39;db1&amp;#39; LOCATION &amp;#39;/user/hive/warehouse/db1&amp;#39; WITH DBPROPERTIES (&amp;#39;name&amp;#39;=&amp;#39;example-db&amp;#39;); CREATE TABLE # Description # The CREATE TABLE statement is used to define a table in an existing database.</description>
    </item>
    <item>
      <title>CSV</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/csv/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/csv/</guid>
      <description>CSV Format # Format: Serialization Schema Format: Deserialization Schema&#xA;The CSV format allows reading and writing CSV data based on a CSV schema. Currently, the CSV schema is derived from the table schema.&#xA;Dependencies # In order to use the CSV format, the following dependencies are required both for projects using a build automation tool (such as Maven or SBT) and for the SQL Client with SQL JAR bundles.&#xA;Maven dependency SQL Client &amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.</description>
    </item>
    <item>
      <title>Debugging Windows &amp; Event Time</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/debugging/debugging_event_time/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/debugging/debugging_event_time/</guid>
      <description>Debugging Windows &amp;amp; Event Time # Monitoring Current Event Time # Flink&amp;rsquo;s event time and watermark support are powerful features for handling out-of-order events. However, it&amp;rsquo;s harder to understand what exactly is going on because the progress of time is tracked within the system.&#xA;The low watermarks of each task can be accessed through the Flink web interface or the metrics system.&#xA;Each Task in Flink exposes a metric called currentInputWatermark that represents the lowest watermark received by this task.</description>
    </item>
    <item>
      <title>Determinism in Continuous Queries</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/concepts/determinism/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/concepts/determinism/</guid>
      <description>Determinism In Continuous Queries # This article is about:&#xA;What is determinism? Is all batch processing deterministic? Two examples of batch queries with non-deterministic results Non-determinism in batch processing Determinism in streaming processing Non-determinism in streaming Non-deterministic update in streaming How to eliminate the impact of non-deterministic update in streaming queries 1. What Is Determinism? # Quoting the SQL standard&amp;rsquo;s description of determinism: &amp;lsquo;an operation is deterministic if that operation assuredly computes identical results when repeated with identical input values&amp;rsquo;.</description>
    </item>
    <item>
      <title>DROP Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/drop/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/drop/</guid>
      <description>DROP Statements # With Hive dialect, the following DROP statements are supported for now:&#xA;DROP DATABASE DROP TABLE DROP VIEW DROP MACRO DROP FUNCTION DROP DATABASE # Description # The DROP DATABASE statement is used to drop a database as well as the tables/directories associated with the database.&#xA;Syntax # DROP (DATABASE|SCHEMA) [IF EXISTS] database_name [RESTRICT|CASCADE]; The uses of SCHEMA and DATABASE are interchangeable - they mean the same thing. The default behavior is RESTRICT, where DROP DATABASE will fail if the database is not empty.</description>
    </item>
    <item>
      <title>Dynamic Tables</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/concepts/dynamic_tables/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/concepts/dynamic_tables/</guid>
      <description>Dynamic Tables # SQL - and the Table API - offer flexible and powerful capabilities for real-time data processing. This page describes how relational concepts elegantly translate to streaming, allowing Flink to achieve the same semantics on unbounded streams.&#xA;Relational Queries on Data Streams # The following table compares traditional relational algebra and stream processing for input data, execution, and output results.&#xA;Relational Algebra / SQL Stream Processing Relations (or tables) are bounded (multi-)sets of tuples.</description>
    </item>
    <item>
      <title>Execution Mode (Batch/Streaming)</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/execution_mode/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/execution_mode/</guid>
      <description>Execution Mode (Batch/Streaming) # The DataStream API supports different runtime execution modes from which you can choose depending on the requirements of your use case and the characteristics of your job.&#xA;There is the &amp;ldquo;classic&amp;rdquo; execution behavior of the DataStream API, which we call STREAMING execution mode. This should be used for unbounded jobs that require continuous incremental processing and are expected to stay online indefinitely.&#xA;Additionally, there is a batch-style execution mode that we call BATCH execution mode.</description>
    </item>
    <item>
      <title>External Resources</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/advanced/external_resources/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/advanced/external_resources/</guid>
      <description>External Resource Framework # In addition to CPU and memory, many workloads also need some other resources, e.g. GPUs for deep learning. To support external resources, Flink provides an external resource framework. The framework supports requesting various types of resources from the underlying resource management systems (e.g., Kubernetes), and supplies information needed for using these resources to the operators. Different resource types can be supported. You can either leverage built-in plugins provided by Flink (currently only for GPU support), or implement your own plugins for custom resource types.</description>
    </item>
    <item>
      <title>Fault Tolerance Guarantees</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/guarantees/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/guarantees/</guid>
      <description>Fault Tolerance Guarantees of Data Sources and Sinks # Flink&amp;rsquo;s fault tolerance mechanism recovers programs in the presence of failures and continues to execute them. Such failures include machine hardware failures, network failures, transient program failures, etc.&#xA;Flink can guarantee exactly-once state updates to user-defined state only when the source participates in the snapshotting mechanism. The following table lists the state update guarantees of Flink coupled with the bundled connectors.</description>
    </item>
    <item>
      <title>Fraud Detection with the DataStream API</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/try-flink/datastream/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/try-flink/datastream/</guid>
      <description>Fraud Detection with the DataStream API # Apache Flink offers a DataStream API for building robust, stateful streaming applications. It provides fine-grained control over state and time, which allows for the implementation of advanced event-driven systems. In this step-by-step guide you&amp;rsquo;ll learn how to build a stateful streaming application with Flink&amp;rsquo;s DataStream API.&#xA;What Are You Building? # Credit card fraud is a growing concern in the digital age. Criminals steal credit card numbers by running scams or hacking into insecure systems.</description>
    </item>
    <item>
      <title>Generating Watermarks</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/event-time/generating_watermarks/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/event-time/generating_watermarks/</guid>
      <description>Generating Watermarks # In this section you will learn about the APIs that Flink provides for working with event time timestamps and watermarks. For an introduction to event time, processing time, and ingestion time, please refer to the introduction to event time.&#xA;Introduction to Watermark Strategies # In order to work with event time, Flink needs to know the events&amp;rsquo; timestamps, meaning each element in the stream needs to have its event timestamp assigned.</description>
    </item>
    <item>
      <title>Getting Started</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/gettingstarted/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/gettingstarted/</guid>
      <description>Getting Started # Flink SQL makes it simple to develop streaming applications using standard SQL. Because Flink SQL remains ANSI-SQL 2011 compliant, it is easy to learn if you have ever worked with a database or a SQL-like system. This tutorial will help you get started quickly with a Flink SQL development environment.&#xA;Prerequisites # You only need basic knowledge of SQL to follow along. No other programming experience is assumed.</description>
    </item>
    <item>
      <title>Hints</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/hints/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/hints/</guid>
      <description>SQL Hints # Batch Streaming&#xA;SQL hints can be used with SQL statements to alter execution plans. This chapter explains how to use hints to force various approaches.&#xA;Generally a hint can be used to:&#xA;Enforce planner: there&amp;rsquo;s no perfect planner, so it makes sense to implement hints to allow users to better control the execution; Append metadata (or statistics): some statistics like “table index for scan” and “skew info of some shuffle keys” are somewhat dynamic for the query, so it is very convenient to configure them with hints because planning metadata from the planner is often not that accurate; Operator resource constraints: for many cases, we would give a default resource configuration for the execution operators, i.</description>
    </item>
    <item>
      <title>Hive Catalog</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/hive/hive_catalog/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/hive/hive_catalog/</guid>
      <description>Hive Catalog # Over the years, the Hive Metastore has evolved into the de facto metadata hub in the Hadoop ecosystem. Many companies have a single Hive Metastore service instance in production to manage all of their metadata, either Hive metadata or non-Hive metadata, as the source of truth.&#xA;For users who have both Hive and Flink deployments, HiveCatalog enables them to use the Hive Metastore to manage Flink&amp;rsquo;s metadata.&#xA;For users who have just a Flink deployment, HiveCatalog is the only persistent catalog provided out-of-the-box by Flink.</description>
    </item>
    <item>
      <title>Installation</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/installation/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/installation/</guid>
      <description>Installation # Environment Requirements # Python version (3.9, 3.10, 3.11 or 3.12) is required for PyFlink. Please run the following command to make sure that it meets the requirements: $ python --version # the version printed here must be 3.9, 3.10, 3.11 or 3.12 Environment Setup # Your system may include multiple Python versions, and thus also include multiple Python binary executables. You can run the following ls command to find out what Python binary executables are available in your system:</description>
    </item>
    <item>
      <title>Java Compatibility</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/java_compatibility/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/java_compatibility/</guid>
      <description>Java compatibility # This page lists which Java versions Flink supports and what limitations apply (if any).&#xA;Java 11 # Support for Java 11 was added in 1.10.0.&#xA;Untested Flink features # The following Flink features have not been tested with Java 11:&#xA;Hive connector Hbase 1.x connector Untested language features # Modularized user jars have not been tested. Java 17 # We use Java 17 by default in Flink 2.</description>
    </item>
    <item>
      <title>Monitoring Checkpointing</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/monitoring/checkpoint_monitoring/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/monitoring/checkpoint_monitoring/</guid>
      <description>Monitoring Checkpointing # Overview # Flink&amp;rsquo;s web interface provides a tab to monitor the checkpoints of jobs. These stats are also available after the job has terminated. There are four different tabs to display information about your checkpoints: Overview, History, Summary, and Configuration. The following sections will cover all of these in turn.&#xA;Monitoring # Overview Tab # The overview tab lists the following statistics. Note that these statistics don&amp;rsquo;t survive a JobManager loss and are reset if your JobManager fails over.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/resource-providers/standalone/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/resource-providers/standalone/overview/</guid>
      <description>Standalone # Getting Started # This Getting Started section guides you through the local setup (on one machine, but in separate processes) of a Flink cluster. This can easily be expanded to set up a distributed standalone cluster, which we describe in the reference section.&#xA;Introduction # The standalone mode is the most barebone way of deploying Flink: The Flink services described in the deployment overview are just launched as processes on the operating system.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream-v2/overview/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream-v2/overview/</guid>
      <description>Note: DataStream API V2 is a new set of APIs intended to gradually replace the original DataStream API. It is currently in the experimental stage and is not fully available for production. Flink DataStream API Programming Guide # DataStream programs in Flink are regular programs that implement transformations on data streams (e.g., filtering, updating state, defining windows, aggregating). The data streams are initially created from various sources (e.g., message queues, socket streams, files).</description>
    </item>
    <item>
      <title>REST Endpoint</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql-gateway/rest/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql-gateway/rest/</guid>
      <description>REST Endpoint # The REST endpoint allows users to connect to the SQL Gateway via its REST API.&#xA;Overview of SQL Processing # Open Session # When the client connects to the SQL Gateway, the SQL Gateway creates a Session as the context to store user-specified information during the interactions between the client and the SQL Gateway. After the creation of the Session, the SQL Gateway server returns an identifier named SessionHandle for later interactions.</description>
    </item>
    <item>
      <title>Sort/Cluster/Distribute By</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/sort-cluster-distribute-by/</guid>
      <description>Sort/Cluster/Distribute By Clause # Sort By # Description # Unlike ORDER BY, which guarantees a total order of the output, SORT BY only guarantees that the result rows within each partition are in the user-specified order. So when there&amp;rsquo;s more than one partition, SORT BY may return results that are only partially ordered.&#xA;Syntax # query: SELECT expression [ , ... ] FROM src sortBy sortBy: SORT BY expression colOrder [ , .</description>
    </item>
    <item>
      <title>SSL Setup</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/security/security-ssl/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/security/security-ssl/</guid>
      <description>SSL Setup # This page provides instructions on how to enable TLS/SSL authentication and encryption for network communication with and between Flink processes. NOTE: TLS/SSL authentication is not enabled by default.&#xA;Internal and External Connectivity # When securing network connections between machines and processes through authentication and encryption, Apache Flink differentiates between internal and external connectivity. Internal Connectivity refers to all connections made between Flink processes. These connections run Flink custom protocols.</description>
    </item>
    <item>
      <title>State Processor API</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/libs/state_processor_api/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/libs/state_processor_api/</guid>
      <description>State Processor API # Apache Flink&amp;rsquo;s State Processor API provides powerful functionality for reading, writing, and modifying savepoints and checkpoints using Flink’s DataStream API and Table API under BATCH execution. Due to the interoperability of DataStream and Table API, you can even use relational Table API or SQL queries to analyze and process state data.&#xA;For example, you can take a savepoint of a running stream processing application and analyze it with a DataStream batch program to verify that the application behaves correctly.</description>
    </item>
    <item>
      <title>State Schema Evolution</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/serialization/schema_evolution/</guid>
      <description>State Schema Evolution # Apache Flink streaming applications are typically designed to run indefinitely or for long periods of time. As with all long-running services, the applications need to be updated to adapt to changing requirements. This goes the same for data schemas that the applications work against; they evolve along with the application.&#xA;This page provides an overview of how you can evolve your state type&amp;rsquo;s data schema. The current restrictions vary across different types and state structures (ValueState, ListState, etc.</description>
    </item>
    <item>
      <title>Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/materialized-table/statements/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/materialized-table/statements/</guid>
      <description>Materialized Table Statements # Flink SQL supports the following Materialized Table statements for now:&#xA;CREATE MATERIALIZED TABLE ALTER MATERIALIZED TABLE DROP MATERIALIZED TABLE CREATE MATERIALIZED TABLE # CREATE MATERIALIZED TABLE [catalog_name.][db_name.]table_name [ ([ &amp;lt;table_constraint&amp;gt; ]) ] [COMMENT table_comment] [PARTITIONED BY (partition_column_name1, partition_column_name2, ...)] [WITH (key1=val1, key2=val2, ...)] [FRESHNESS = INTERVAL &amp;#39;&amp;lt;num&amp;gt;&amp;#39; { SECOND[S] | MINUTE[S] | HOUR[S] | DAY[S] }] [REFRESH_MODE = { CONTINUOUS | FULL }] AS &amp;lt;select_statement&amp;gt; &amp;lt;table_constraint&amp;gt;: [CONSTRAINT constraint_name] PRIMARY KEY (column_name, .</description>
    </item>
    <item>
      <title>Using Maven</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/configuration/maven/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/configuration/maven/</guid>
      <description>How to use Maven to configure your project # This guide will show you how to configure a Flink job project with Maven, an open-source build automation tool developed by the Apache Software Foundation that enables you to build, publish, and deploy projects. You can use it to manage the entire lifecycle of your software project.&#xA;Requirements # Maven 3.8.6 Java 11 Importing the project into your IDE # Once the project folder and files have been created, we recommend that you import this project into your IDE for developing and testing.</description>
    </item>
    <item>
      <title>Windows</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/operators/windows/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/operators/windows/</guid>
      <description>Windows # Windows are at the heart of processing infinite streams. Windows split the stream into &amp;ldquo;buckets&amp;rdquo; of finite size, over which we can apply computations. This document focuses on how windowing is performed in Flink and how programmers can get the most out of the functionality it offers.&#xA;The general structure of a windowed Flink program is presented below. The first snippet refers to keyed streams, while the second to non-keyed ones.</description>
    </item>
    <item>
      <title>Windows</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/datastream/operators/windows/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/datastream/operators/windows/</guid>
      <description> </description>
    </item>
    <item>
      <title>Working with State V2</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/state_v2/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/state_v2/</guid>
      <description>Working with State V2 (New APIs) # In this section you will learn about the new APIs that Flink provides for writing stateful programs. Please take a look at Stateful Stream Processing to learn about the concepts behind stateful stream processing.&#xA;The new state API is designed to be more flexible than the previous API. Users can perform asynchronous state operations, making it more powerful and more efficient. Asynchronous state access is essential for the state backend to handle large state sizes and to spill to remote file systems when necessary.</description>
    </item>
    <item>
      <title>ZooKeeper HA Services</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/ha/zookeeper_ha/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/ha/zookeeper_ha/</guid>
      <description>ZooKeeper HA Services # Flink&amp;rsquo;s ZooKeeper HA services use ZooKeeper for high availability services.&#xA;Flink leverages ZooKeeper for distributed coordination between all running JobManager instances. ZooKeeper is a separate service from Flink, which provides highly reliable distributed coordination via leader election and light-weight consistent state storage. Check out ZooKeeper&amp;rsquo;s Getting Started Guide for more information about ZooKeeper. Flink includes scripts to bootstrap a simple ZooKeeper installation.&#xA;Configuration # In order to start an HA-cluster you have to configure the following configuration keys:</description>
    </item>
    <item>
      <title>ALTER Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/alter/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/alter/</guid>
      <description>ALTER Statements # With Hive dialect, the following ALTER statements are supported for now:&#xA;ALTER DATABASE ALTER TABLE ALTER VIEW ALTER DATABASE # Description # The ALTER DATABASE statement is used to change the properties or location of a database.&#xA;Syntax # -- alter database&amp;#39;s properties ALTER (DATABASE|SCHEMA) database_name SET DBPROPERTIES (property_name=property_value, ...); -- alter database&amp;#39;s location ALTER (DATABASE|SCHEMA) database_name SET LOCATION hdfs_path; Synopsis # The uses of SCHEMA and DATABASE are interchangeable - they mean the same thing.</description>
    </item>
    <item>
      <title>Amazon S3</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/filesystems/s3/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/filesystems/s3/</guid>
      <description>Amazon S3 # Amazon Simple Storage Service (Amazon S3) provides cloud object storage for a variety of use cases. You can use S3 with Flink for reading and writing data, as well as in conjunction with the streaming state backends.&#xA;You can use S3 objects like regular files by specifying paths in the following format:&#xA;s3://&amp;lt;your-bucket&amp;gt;/&amp;lt;endpoint&amp;gt; The endpoint can either be a single file or a directory, for example:&#xA;// Read from S3 bucket FileSource&amp;lt;String&amp;gt; fileSource = FileSource.</description>
    </item>
    <item>
      <title>Building Blocks</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream-v2/building_blocks/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream-v2/building_blocks/</guid>
      <description>Note: DataStream API V2 is a new set of APIs intended to gradually replace the original DataStream API. It is currently in the experimental stage and is not fully available for production. Building Blocks # DataStream, Partitioning, and ProcessFunction are the most fundamental elements of the DataStream API and respectively represent:&#xA;What are the types of data streams&#xA;How data is partitioned&#xA;How to perform operations / processing on data streams&#xA;They are also the core parts of the fundamental primitives provided by the DataStream API.</description>
    </item>
    <item>
      <title>Builtin Watermark Generators</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/event-time/built_in/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/event-time/built_in/</guid>
      <description>Builtin Watermark Generators # As described in Generating Watermarks, Flink provides abstractions that allow the programmer to assign their own timestamps and emit their own watermarks. More specifically, one can do so by implementing the WatermarkGenerator interface.&#xA;In order to further ease the programming effort for such tasks, Flink comes with some pre-implemented timestamp assigners. This section provides a list of them. Apart from their out-of-the-box functionality, their implementation can serve as an example for custom implementations.</description>
    </item>
    <item>
      <title>Configuration</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/config/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/config/</guid>
      <description>Configuration # All configuration can be set in the Flink configuration file in the conf/ directory (see Flink Configuration File).&#xA;The configuration is parsed and evaluated when the Flink processes are started. Changes to the configuration file require restarting the relevant processes.&#xA;The out-of-the-box configuration will use your default Java installation. You can set the environment variable JAVA_HOME or the configuration key env.java.home in the Flink configuration file if you want to manually override the Java runtime to use.</description>
    </item>
    <item>
      <title>Custom State Serialization</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/serialization/custom_serialization/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/serialization/custom_serialization/</guid>
      <description>Custom Serialization for Managed State # This page serves as a guideline for users who require custom serialization for their state, covering how to provide a custom state serializer as well as guidelines and best practices for implementing serializers that allow state schema evolution.&#xA;If you&amp;rsquo;re simply using Flink&amp;rsquo;s own serializers, this page is irrelevant and can be ignored.&#xA;Using custom state serializers # When registering a managed operator or keyed state, a StateDescriptor is required to specify the state&amp;rsquo;s name, as well as information about the type of the state.</description>
    </item>
    <item>
      <title>DataGen</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/datagen/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/datagen/</guid>
      <description>DataGen Connector # The DataGen connector provides a Source implementation that allows for generating input data for Flink pipelines. It is useful when developing locally or demoing without access to external systems such as Kafka. The DataGen connector is built-in; no additional dependencies are required.&#xA;Usage # The DataGeneratorSource produces N data points in parallel. The source splits the sequence into as many parallel sub-sequences as there are parallel source subtasks.</description>
    </item>
    <item>
      <title>DataStream API Integration</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/data_stream_api/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/data_stream_api/</guid>
      <description>DataStream API Integration # Both Table API and DataStream API are equally important when it comes to defining a data processing pipeline.&#xA;The DataStream API offers the primitives of stream processing (namely time, state, and dataflow management) in a relatively low-level imperative programming API. The Table API abstracts away many internals and provides a structured and declarative API.&#xA;Both APIs can work with bounded and unbounded streams.&#xA;Bounded streams need to be managed when processing historical data.</description>
    </item>
    <item>
      <title>Debugging Classloading</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/debugging/debugging_classloading/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/debugging/debugging_classloading/</guid>
      <description>Debugging Classloading # Overview of Classloading in Flink # When running Flink applications, the JVM will load various classes over time. These classes can be divided into three groups based on their origin:&#xA;The Java Classpath: This is Java&amp;rsquo;s common classpath, and it includes the JDK libraries and all code in Flink&amp;rsquo;s /lib folder (the classes of Apache Flink and some dependencies). They are loaded by the AppClassLoader.&#xA;The Flink Plugin Components: The plugin code in folders under Flink&amp;rsquo;s /plugins folder.</description>
    </item>
    <item>
      <title>Deployment</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/materialized-table/deployment/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/materialized-table/deployment/</guid>
      <description>Introduction # Creating and operating materialized tables requires multiple components working together. This document systematically explains the complete deployment solution for Materialized Tables, covering an architectural overview, environment preparation, deployment procedures, and operational practices.&#xA;Architecture Introduction # Client: Can be any client that can interact with the Flink SQL Gateway, such as the SQL Client, the Flink JDBC Driver, and so on. Flink SQL Gateway: Supports creating, altering, and dropping Materialized Tables. It also serves as an embedded workflow scheduler to periodically refresh full-mode Materialized Tables.</description>
    </item>
    <item>
      <title>Dynamic Kafka</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/dynamic-kafka/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/dynamic-kafka/</guid>
      <description>Dynamic Kafka Source Experimental # Flink provides an Apache Kafka connector for reading data from Kafka topics from one or more Kafka clusters. The Dynamic Kafka connector discovers the clusters and topics using a Kafka metadata service and can achieve reading in a dynamic fashion, facilitating changes in topics and/or clusters, without requiring a job restart. This is especially useful when you need to read a new Kafka cluster/topic and/or stop reading an existing Kafka cluster/topic (cluster migration/failover/other infrastructure changes) and when you need direct integration with Hybrid Source.</description>
    </item>
    <item>
      <title>Flame Graphs</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/debugging/flame_graphs/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/debugging/flame_graphs/</guid>
      <description>Flame Graphs # Flame Graphs are a visualization that effectively surfaces answers to questions like:&#xA;Which methods are currently consuming CPU resources? How does consumption by one method compare to the others? Which series of calls on the stack led to executing a particular method? Flame Graphs are constructed by sampling stack traces a number of times. Each method call is represented by a bar, where the length of the bar is proportional to the number of times it is present in the samples.</description>
    </item>
    <item>
      <title>Google Cloud Storage</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/filesystems/gcs/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/filesystems/gcs/</guid>
      <description>Google Cloud Storage # Google Cloud Storage (GCS) provides cloud storage for a variety of use cases. You can use it for reading and writing data, and for checkpoint storage when using FileSystemCheckpointStorage with the streaming state backends.&#xA;You can use GCS objects like regular files by specifying paths in the following format:&#xA;gs://&amp;lt;your-bucket&amp;gt;/&amp;lt;endpoint&amp;gt; The endpoint can either be a single file or a directory, for example:&#xA;// Read from GCS bucket FileSource&amp;lt;String&amp;gt; fileSource = FileSource.</description>
    </item>
    <item>
      <title>Group By</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/group-by/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/group-by/</guid>
      <description>Group By Clause # Description # The GROUP BY clause is used to compute a single result from multiple input rows with a given aggregation function. The Hive dialect also supports enhanced aggregation features to perform multiple aggregations on the same record by using ROLLUP/CUBE/GROUPING SETS.&#xA;Syntax # group_by_clause: group_by_clause_1 | group_by_clause_2 group_by_clause_1: GROUP BY group_expression [ , ... ] [ WITH ROLLUP | WITH CUBE ] group_by_clause_2: GROUP BY { group_expression | { ROLLUP | CUBE | GROUPING SETS } ( grouping_set [ , .</description>
    </item>
    <item>
      <title>History Server</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/advanced/historyserver/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/advanced/historyserver/</guid>
      <description>History Server # Flink has a history server that can be used to query the statistics of completed jobs after the corresponding Flink cluster has been shut down.&#xA;Furthermore, it exposes a REST API that accepts HTTP requests and responds with JSON data.&#xA;Overview # The HistoryServer allows you to query the status and statistics of completed jobs that have been archived by a JobManager.&#xA;After you have configured the HistoryServer and JobManager, you start and stop the HistoryServer via its corresponding startup script:</description>
    </item>
    <item>
      <title>HiveServer2 Endpoint</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql-gateway/hiveserver2/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql-gateway/hiveserver2/</guid>
      <description>HiveServer2 Endpoint # The Flink SQL Gateway supports deploying as a HiveServer2 Endpoint which is compatible with HiveServer2 wire protocol. This allows users to submit Hive-dialect SQL through the Flink SQL Gateway with existing Hive clients using Thrift or the Hive JDBC driver. These clients include Beeline, DBeaver, Apache Superset and so on.&#xA;It is recommended to use the HiveServer2 Endpoint with a Hive Catalog and Hive dialect to get the same experience as HiveServer2.</description>
    </item>
    <item>
      <title>INSERT Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/insert/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/insert/</guid>
      <description>INSERT Statements # INSERT TABLE # Description # The INSERT TABLE statement is used to insert rows into a table or overwrite the existing data in the table. The rows to be inserted can be specified by value expressions or as the result of a query.&#xA;Syntax # -- Standard syntax INSERT { OVERWRITE TABLE | INTO [TABLE] } tablename [PARTITION (partcol1[=val1], partcol2[=val2] ...) [IF NOT EXISTS]] { VALUES ( value [, .</description>
    </item>
    <item>
      <title>Intro to the DataStream API</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/learn-flink/datastream_api/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/learn-flink/datastream_api/</guid>
      <description>Intro to the DataStream API # The focus of this training is to broadly cover the DataStream API well enough that you will be able to get started writing streaming applications.&#xA;What can be Streamed? # Flink&amp;rsquo;s DataStream APIs will let you stream anything they can serialize. Flink&amp;rsquo;s own serializer is used for&#xA;basic types, i.e., String, Long, Integer, Boolean, Array; composite types: Tuples, POJOs; and Flink falls back to Kryo for other types.</description>
    </item>
    <item>
      <title>Joining</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/operators/joining/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/operators/joining/</guid>
      <description>Joining # Window Join # A window join joins the elements of two streams that share a common key and lie in the same window. These windows can be defined by using a window assigner and are evaluated on elements from both of the streams.&#xA;The elements from both sides are then passed to a user-defined JoinFunction or FlatJoinFunction where the user can emit results that meet the join criteria.</description>
    </item>
    <item>
      <title>JSON</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/json/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/json/</guid>
      <description>JSON Format # Format: Serialization Schema Format: Deserialization Schema&#xA;The JSON format allows reading and writing JSON data based on a JSON schema. Currently, the JSON schema is derived from the table schema.&#xA;The JSON format supports append-only streams, unless you&amp;rsquo;re using a connector that explicitly supports retract streams and/or upsert streams, like the Upsert Kafka connector. If you need to write retract streams and/or upsert streams, we suggest looking at CDC JSON formats like Debezium JSON and Canal JSON.</description>
    </item>
    <item>
      <title>Kafka</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/kafka/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/kafka/</guid>
      <description>Apache Kafka Connector # Flink provides an Apache Kafka connector for reading data from and writing data to Kafka topics with exactly-once guarantees.&#xA;Dependency # Apache Flink ships with a universal Kafka connector which attempts to track the latest version of the Kafka client. The version of the client it uses may change between Flink releases. Modern Kafka clients are backwards compatible with broker versions 0.10.0 or later. For details on Kafka compatibility, please refer to the official Kafka documentation.</description>
    </item>
    <item>
      <title>Kafka</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/kafka/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/kafka/</guid>
      <description>Apache Kafka SQL Connector # Scan Source: Unbounded Sink: Streaming Append Mode&#xA;The Kafka connector allows for reading data from and writing data into Kafka topics.&#xA;Dependencies # There is no connector (yet) available for Flink version 2.2.&#xA;The Kafka connector is not part of the binary distribution. See how to link with it for cluster execution here.&#xA;How to create a Kafka table # The example below shows how to create a Kafka table:</description>
    </item>
    <item>
      <title>Kerberos</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/security/security-kerberos/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/security/security-kerberos/</guid>
      <description>Kerberos Authentication Setup and Configuration # This document briefly describes how Flink security works in the context of various deployment mechanisms (Standalone, native Kubernetes, YARN), filesystems, connectors, and state backends.&#xA;Objective # The primary goals of the Flink Kerberos security infrastructure are:&#xA;to enable secure data access for jobs within a cluster via connectors (e.g. Kafka) to authenticate to ZooKeeper (if configured to use SASL) to authenticate to Hadoop components (e.</description>
    </item>
    <item>
      <title>Kubernetes HA Services</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/ha/kubernetes_ha/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/ha/kubernetes_ha/</guid>
      <description>Kubernetes HA Services # Flink&amp;rsquo;s Kubernetes HA services use Kubernetes for high availability services.&#xA;Kubernetes high availability services can only be used when deploying to Kubernetes. Consequently, they can be configured when using standalone Flink on Kubernetes or the native Kubernetes integration.&#xA;Prerequisites # In order to use Flink&amp;rsquo;s Kubernetes HA services, you must fulfill the following prerequisites:&#xA;Kubernetes &amp;gt;= 1.9. Service account with permissions to create, edit, and delete ConfigMaps.</description>
    </item>
    <item>
      <title>Monitoring Back Pressure</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/monitoring/back_pressure/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/monitoring/back_pressure/</guid>
      <description>Monitoring Back Pressure # Flink&amp;rsquo;s web interface provides a tab to monitor the back pressure behaviour of running jobs.&#xA;Back Pressure # If you see a back pressure warning (e.g. High) for a task, this means that it is producing data faster than the downstream operators can consume. Records in your job flow downstream (e.g. from sources to sinks) and back pressure is propagated in the opposite direction, up the stream.</description>
    </item>
    <item>
      <title>Native Kubernetes</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/resource-providers/native_kubernetes/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/resource-providers/native_kubernetes/</guid>
      <description>Native Kubernetes # This page describes how to deploy Flink natively on Kubernetes.&#xA;Getting Started # This Getting Started section guides you through setting up a fully functional Flink Cluster on Kubernetes.&#xA;Introduction # Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management. Flink&amp;rsquo;s native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster. Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.</description>
    </item>
    <item>
      <title>Profiler</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/debugging/profiler/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/debugging/profiler/</guid>
      <description>Profiler # Since Flink 1.19, we support profiling the JobManager/TaskManager process interactively with async-profiler via the Flink Web UI, which allows users to create a profiling instance with arbitrary intervals and event modes, e.g. ITIMER, CPU, Lock, Wall-Clock and Allocation.&#xA;CPU: In this mode the profiler collects stack trace samples that include Java methods, native calls, JVM code and kernel functions. ALLOCATION: In allocation profiling mode, the top frame of every call trace is the class of the allocated object, and the counter is the heap pressure (the total size of allocated TLABs or objects outside TLAB).</description>
    </item>
    <item>
      <title>Set up TaskManager Memory</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/memory/mem_setup_tm/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/memory/mem_setup_tm/</guid>
      <description>Set up TaskManager Memory # The TaskManager runs user code in Flink. Configuring memory usage for your needs can greatly reduce Flink&amp;rsquo;s resource footprint and improve Job stability.&#xA;The further described memory configuration is applicable starting with the release version 1.10. If you upgrade Flink from earlier versions, check the migration guide because many changes were introduced with the 1.10 release.&#xA;This memory setup guide is relevant only for TaskManagers!</description>
    </item>
    <item>
      <title>Stateful Stream Processing</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/concepts/stateful-stream-processing/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/concepts/stateful-stream-processing/</guid>
      <description>Stateful Stream Processing # What is State? # While many operations in a dataflow simply look at one individual event at a time (for example an event parser), some operations remember information across multiple events (for example window operators). These operations are called stateful.&#xA;Some examples of stateful operations:&#xA;When an application searches for certain event patterns, the state will store the sequence of events encountered so far. When aggregating events per minute/hour/day, the state holds the pending aggregates.</description>
    </item>
    <item>
      <title>The Broadcast State Pattern</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/broadcast_state/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/broadcast_state/</guid>
      <description>The Broadcast State Pattern # In this section you will learn how to use broadcast state in practice. Please refer to Stateful Stream Processing to learn about the concepts behind stateful stream processing.&#xA;Provided APIs # To show the provided APIs, we will start with an example before presenting their full functionality. As our running example, we will use the case where we have a stream of objects of different colors and shapes and we want to find pairs of objects of the same color that follow a certain pattern, e.</description>
    </item>
    <item>
      <title>Time Attributes</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/concepts/time_attributes/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/concepts/time_attributes/</guid>
      <description>Time Attributes # Flink can process data based on different notions of time.&#xA;Processing time refers to the machine&amp;rsquo;s system time (also known as epoch time, e.g. Java&amp;rsquo;s System.currentTimeMillis()) that is executing the respective operation. Event time refers to the processing of streaming data based on timestamps that are attached to each row. The timestamps can encode when an event happened. For more information about time handling in Flink, see the introduction about event time and watermarks.</description>
    </item>
    <item>
      <title>Timely Stream Processing</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/concepts/time/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/concepts/time/</guid>
      <description>Timely Stream Processing # Introduction # Timely stream processing is an extension of stateful stream processing in which time plays some role in the computation. Among other things, this is the case when you do time series analysis, when doing aggregations based on certain time periods (typically called windows), or when you do event processing where the time when an event occurred is important.&#xA;In the following sections we will highlight some of the topics that you should consider when working with timely Flink Applications.</description>
    </item>
    <item>
      <title>Using Gradle</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/configuration/gradle/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/configuration/gradle/</guid>
      <description>How to use Gradle to configure your project # You will likely need a build tool to configure your Flink project. This guide will show you how to do so with Gradle, an open-source general-purpose build tool that can be used to automate tasks in the development process.&#xA;Requirements # Gradle 7.x Java 8 (deprecated) or Java 11 Importing the project into your IDE # Once the project folder and files have been created, we recommend that you import this project into your IDE for developing and testing.</description>
    </item>
    <item>
      <title>WITH clause</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/with/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/with/</guid>
      <description>WITH clause # Batch Streaming&#xA;WITH provides a way to write auxiliary statements for use in a larger query. These statements, which are often referred to as Common Table Expressions (CTEs), can be thought of as defining temporary views that exist just for one query.&#xA;The syntax of the WITH statement is:&#xA;WITH &amp;lt;with_item_definition&amp;gt; [ , ... ] SELECT ... FROM ...; &amp;lt;with_item_definition&amp;gt;: with_item_name (column_name[, ...n]) AS ( &amp;lt;select_query&amp;gt; ) The following example defines a common table expression orders_with_total and uses it in a GROUP BY query.</description>
    </item>
    <item>
      <title>Working Directory</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/resource-providers/standalone/working_directory/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/resource-providers/standalone/working_directory/</guid>
      <description>Working Directory # Flink supports configuring a working directory (FLIP-198) for Flink processes (JobManager and TaskManager). The working directory is used by the processes to store information that can be recovered upon a process restart. The requirement for this to work is that the process is started with the same identity and has access to the volume on which the working directory is stored.&#xA;Configuring the Working Directory # The working directories for the Flink processes are:</description>
    </item>
    <item>
      <title>3rd Party Serializers</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/serialization/third_party_serializers/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/serialization/third_party_serializers/</guid>
      <description>3rd Party Serializers # If you use a custom type in your Flink program which cannot be serialized by the Flink type serializer, Flink falls back to using the generic Kryo serializer. You may register your own serializer or a serialization system like Google Protobuf or Apache Thrift with Kryo. To do that, simply register the type class and the serializer via the configuration option pipeline.serialization-config:&#xA;pipeline.serialization-config: - org.example.MyCustomType: {type: kryo, kryo-type: registered, class: org.</description>
    </item>
    <item>
      <title>Aliyun OSS</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/filesystems/oss/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/filesystems/oss/</guid>
      <description>Aliyun Object Storage Service (OSS) # OSS: Object Storage Service # Aliyun Object Storage Service (Aliyun OSS) is widely used, particularly among China’s cloud users, and provides cloud object storage for a variety of use cases. You can use OSS with Flink for reading and writing data, as well as in conjunction with the streaming state backends.&#xA;You can use OSS objects like regular files by specifying paths in the following format:</description>
    </item>
    <item>
      <title>Application Profiling &amp; Debugging</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/debugging/application_profiling/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/debugging/application_profiling/</guid>
      <description>Application Profiling &amp;amp; Debugging # Overview of Custom Logging with Apache Flink # Each standalone JobManager, TaskManager, HistoryServer, and ZooKeeper daemon redirects stdout and stderr to a file with a .out filename suffix and writes internal logging to a file with a .log suffix. Java options configured by the user in env.java.opts.all, env.java.opts.jobmanager, env.java.opts.taskmanager, env.java.opts.historyserver and env.java.opts.client can likewise define log files with use of the script variable FLINK_LOG_PREFIX and by enclosing the options in double quotes for late evaluation.</description>
    </item>
    <item>
      <title>Avro</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/formats/avro/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/formats/avro/</guid>
      <description>Avro format # Flink has built-in support for Apache Avro. This allows you to easily read and write Avro data based on an Avro schema with Flink. Flink&amp;rsquo;s serialization framework is able to handle classes generated from Avro schemas. In order to use the Avro format, the following dependencies are required for projects using a build automation tool (such as Maven or SBT).&#xA;&amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.apache.flink&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;flink-avro&amp;lt;/artifactId&amp;gt; &amp;lt;version&amp;gt;2.2.0&amp;lt;/version&amp;gt; &amp;lt;/dependency&amp;gt; In order to use the Avro format in PyFlink jobs, the following dependencies are required: PyFlink JAR Download See Python dependency management for more details on how to use JARs in PyFlink.</description>
    </item>
    <item>
      <title>Avro</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/avro/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/avro/</guid>
      <description>Avro Format # Format: Serialization Schema Format: Deserialization Schema&#xA;The Apache Avro format allows you to read and write Avro data based on an Avro schema. Currently, the Avro schema is derived from the table schema.&#xA;Dependencies # In order to use the Avro format, the following dependencies are required for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.&#xA;Maven dependency SQL Client &amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.</description>
    </item>
    <item>
      <title>Azure Blob Storage</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/filesystems/azure/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/filesystems/azure/</guid>
      <description>Azure Blob Storage # Azure Blob Storage is a Microsoft-managed service providing cloud storage for a variety of use cases. You can use Azure Blob Storage with Flink for reading and writing data, as well as in conjunction with the streaming state backends.&#xA;Flink supports accessing Azure Blob Storage using both wasb:// and abfs://.&#xA;Azure recommends using abfs:// for accessing ADLS Gen2 storage accounts, even though wasb:// works through backward compatibility.</description>
    </item>
    <item>
      <title>Azure Table storage</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/formats/azure_table_storage/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/formats/azure_table_storage/</guid>
      <description>Azure Table Storage # This example uses the HadoopInputFormat wrapper to access Azure&amp;rsquo;s Table Storage with an existing Hadoop input format implementation.&#xA;Download and compile the azure-tables-hadoop project. The input format developed by the project is not yet available in Maven Central; therefore, we have to build the project ourselves. Execute the following commands: git clone https://github.com/mooso/azure-tables-hadoop.git cd azure-tables-hadoop mvn clean install Set up a new Flink project using the quickstarts: curl https://flink.</description>
    </item>
    <item>
      <title>Batch Shuffle</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/batch/batch_shuffle/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/batch/batch_shuffle/</guid>
      <description>Batch Shuffle # Overview # Flink supports a batch execution mode in both the DataStream API and Table / SQL for jobs executing across bounded input. In batch execution mode, Flink offers two modes for network exchanges: Blocking Shuffle and Hybrid Shuffle.&#xA;Blocking Shuffle is the default data exchange mode for batch executions. It persists all intermediate data, which can be consumed only after it has been fully produced. Hybrid Shuffle is the next generation data exchange mode for batch executions.</description>
    </item>
    <item>
      <title>Cassandra</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/cassandra/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/cassandra/</guid>
      <description>Apache Cassandra Connector # This connector provides sinks that write data into an Apache Cassandra database.&#xA;To use this connector, add the following dependency to your project:&#xA;&amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.apache.flink&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;flink-connector-cassandra_2.12&amp;lt;/artifactId&amp;gt; &amp;lt;version&amp;gt;2.2.0&amp;lt;/version&amp;gt; &amp;lt;/dependency&amp;gt; Note that the streaming connectors are currently NOT part of the binary distribution. See how to link with them for cluster execution here.&#xA;Installing Apache Cassandra # There are multiple ways to bring up a Cassandra instance on a local machine:</description>
    </item>
    <item>
      <title>Checkpointing</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/checkpointing/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/checkpointing/</guid>
      <description>Checkpointing # Every function and operator in Flink can be stateful (see working with state for details). Stateful functions store data across the processing of individual elements/events, making state a critical building block for any type of more elaborate operation.&#xA;In order to make state fault tolerant, Flink needs to checkpoint the state. Checkpoints allow Flink to recover state and positions in the streams to give the application the same semantics as a failure-free execution.</description>
    </item>
    <item>
      <title>Confluent Avro</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/avro-confluent/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/avro-confluent/</guid>
      <description>Confluent Avro Format # Format: Serialization Schema Format: Deserialization Schema&#xA;The Avro Schema Registry (avro-confluent) format allows you to read records that were serialized by the io.confluent.kafka.serializers.KafkaAvroSerializer and to write records that can in turn be read by the io.confluent.kafka.serializers.KafkaAvroDeserializer.&#xA;When reading (deserializing) a record with this format the Avro writer schema is fetched from the configured Confluent Schema Registry based on the schema version id encoded in the record while the reader schema is inferred from table schema.</description>
    </item>
    <item>
      <title>Context and State Processing</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream-v2/context_and_state_processing/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream-v2/context_and_state_processing/</guid>
      <description>Note: DataStream API V2 is a new set of APIs, intended to gradually replace the original DataStream API. It is currently in the experimental stage and is not fully available for production. Context and State Processing # Context # Unlike attributes such as the name of a process operation, some information (such as the current key) can only be obtained when the process function is executed. In order to build a bridge between process functions and the execution engine, the DataStream API provides a unified entry point called the Runtime Context.</description>
    </item>
    <item>
      <title>CREATE Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/create/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/create/</guid>
      <description>CREATE Statements # CREATE statements are used to register a table/view/function into the current or a specified Catalog. A registered table/view/function can be used in SQL queries.&#xA;Flink SQL currently supports the following CREATE statements:&#xA;CREATE TABLE [CREATE OR] REPLACE TABLE CREATE CATALOG CREATE DATABASE CREATE VIEW CREATE FUNCTION CREATE MODEL Run a CREATE statement # Java CREATE statements can be executed with the executeSql() method of the TableEnvironment. The executeSql() method returns &amp;lsquo;OK&amp;rsquo; for a successful CREATE operation, and throws an exception otherwise.</description>
    </item>
    <item>
      <title>CSV</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/formats/csv/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/formats/csv/</guid>
      <description>CSV format # To use the CSV format you need to add the Flink CSV dependency to your project:&#xA;&amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.apache.flink&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;flink-csv&amp;lt;/artifactId&amp;gt; &amp;lt;version&amp;gt;2.2.0&amp;lt;/version&amp;gt; &amp;lt;/dependency&amp;gt; PyFlink users can use it directly in their jobs.&#xA;Flink supports reading CSV files using CsvReaderFormat. The reader utilizes the Jackson library and allows passing the corresponding configuration for the CSV schema and parsing options.&#xA;CsvReaderFormat can be initialized and used like this:&#xA;CsvReaderFormat&amp;lt;SomePojo&amp;gt; csvFormat = CsvReaderFormat.</description>
    </item>
    <item>
      <title>Data Pipelines &amp; ETL</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/learn-flink/etl/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/learn-flink/etl/</guid>
      <description>Data Pipelines &amp;amp; ETL # One very common use case for Apache Flink is to implement ETL (extract, transform, load) pipelines that take data from one or more sources, perform some transformations and/or enrichments, and then store the results somewhere. In this section we are going to look at how to use Flink&amp;rsquo;s DataStream API to implement this kind of application.&#xA;Note that Flink&amp;rsquo;s Table and SQL APIs are well suited for many ETL use cases.</description>
    </item>
    <item>
      <title>Delegation tokens</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/security/security-delegation-token/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/security/security-delegation-token/</guid>
      <description>Delegation Tokens # This document aims to explain and demystify delegation tokens as they are used by Flink. Before we get into the details, here is the high-level architecture diagram:&#xA;What Are Delegation Tokens and Why Use Them? # Delegation tokens (DTs from now on) are authentication tokens used by some services to replace long-lived credentials. Many services in the Hadoop ecosystem have support for DTs, since they have some very desirable advantages over long-lived credentials:</description>
    </item>
    <item>
      <title>Docker</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/resource-providers/standalone/docker/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/resource-providers/standalone/docker/</guid>
      <description>Docker Setup # Getting Started # This Getting Started section guides you through the local setup (on one machine, but in separate containers) of a Flink cluster using Docker containers.&#xA;Introduction # Docker is a popular container runtime. There are official Docker images for Apache Flink available on Docker Hub. You can use the Docker images to deploy a Session or Application cluster on Docker. This page focuses on the setup of Flink on Docker and Docker Compose.</description>
    </item>
    <item>
      <title>Flink Architecture</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/concepts/flink-architecture/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/concepts/flink-architecture/</guid>
      <description>Flink Architecture # Flink is a distributed system and requires effective allocation and management of compute resources in order to execute streaming applications. It integrates with all common cluster resource managers such as Hadoop YARN and Kubernetes, but can also be set up to run as a standalone cluster or even as a library.&#xA;This section contains an overview of Flink’s architecture and describes how its main components interact to execute applications and recover from failures.</description>
    </item>
    <item>
      <title>Hadoop</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/formats/hadoop/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/formats/hadoop/</guid>
      <description>Hadoop formats # Project Configuration # Support for Hadoop is contained in the flink-hadoop-compatibility Maven module.&#xA;Add the following dependency to your pom.xml to use Hadoop:&#xA;&amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.apache.flink&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;flink-hadoop-compatibility&amp;lt;/artifactId&amp;gt; &amp;lt;version&amp;gt;2.2.0&amp;lt;/version&amp;gt; &amp;lt;/dependency&amp;gt; If you want to run your Flink application locally (e.g. from your IDE), you also need to add a hadoop-client dependency such as:&#xA;&amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.apache.hadoop&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;hadoop-client&amp;lt;/artifactId&amp;gt; &amp;lt;version&amp;gt;2.10.2&amp;lt;/version&amp;gt; &amp;lt;scope&amp;gt;provided&amp;lt;/scope&amp;gt; &amp;lt;/dependency&amp;gt; Using Hadoop InputFormats # To use Hadoop InputFormats with Flink, the format must first be wrapped using either readHadoopFile or createHadoopInput of the HadoopInputs utility class.</description>
    </item>
    <item>
      <title>Hive Read &amp; Write</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/hive/hive_read_write/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/hive/hive_read_write/</guid>
      <description>Hive Read &amp;amp; Write # Using the HiveCatalog, Apache Flink can be used for unified BATCH and STREAM processing of Apache Hive Tables. This means Flink can be used as a more performant alternative to Hive’s batch engine, or to continuously read and write data into and out of Hive tables to power real-time data warehousing applications.&#xA;Reading # Flink supports reading data from Hive in both BATCH and STREAMING modes.</description>
    </item>
    <item>
      <title>Importing Flink into an IDE</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/flinkdev/ide_setup/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/flinkdev/ide_setup/</guid>
      <description>Importing Flink into an IDE # The sections below describe how to import the Flink project into an IDE for the development of Flink itself. For writing Flink programs, please refer to the Java API quickstart guides.&#xA;Whenever something is not working in your IDE, try with the Maven command line first (mvn clean package -DskipTests) as it might be your IDE that has a bug or is not properly set up.</description>
    </item>
    <item>
      <title>Join</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/join/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/join/</guid>
      <description>Join # Description # JOIN is used to combine rows from two relations based on a join condition.&#xA;Syntax # Hive Dialect supports the following syntax for joining tables:&#xA;join_table: table_reference [ INNER ] JOIN table_factor [ join_condition ] | table_reference { LEFT | RIGHT | FULL } [ OUTER ] JOIN table_reference join_condition | table_reference LEFT SEMI JOIN table_reference [ ON expression ] | table_reference CROSS JOIN table_reference [ join_condition ] table_reference: table_factor | join_table table_factor: tbl_name [ alias ] | table_subquery alias | ( table_references ) join_condition: { ON expression | USING ( colName [, .</description>
    </item>
    <item>
      <title>JSON</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/formats/json/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/formats/json/</guid>
      <description>JSON format # To use the JSON format you need to add the Flink JSON dependency to your project:&#xA;&amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.apache.flink&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;flink-json&amp;lt;/artifactId&amp;gt; &amp;lt;version&amp;gt;2.2.0&amp;lt;/version&amp;gt; &amp;lt;scope&amp;gt;provided&amp;lt;/scope&amp;gt; &amp;lt;/dependency&amp;gt; PyFlink users can use it directly in their jobs.&#xA;Flink supports reading/writing JSON records via the JsonSerializationSchema/JsonDeserializationSchema. These utilize the Jackson library, and support any type that is supported by Jackson, including, but not limited to, POJOs and ObjectNode.&#xA;The JsonDeserializationSchema can be used with any connector that supports the DeserializationSchema.</description>
    </item>
    <item>
      <title>Load Data Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/load-data/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/load-data/</guid>
      <description>Load Data Statements # Description # The LOAD DATA statement is used to load data into a Hive table from a user-specified directory or file. The load operation is currently a pure copy/move operation that moves data files into locations corresponding to Hive tables.&#xA;Syntax # LOAD DATA [LOCAL] INPATH &amp;#39;filepath&amp;#39; [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)]; Parameters # filepath&#xA;The filepath can be:&#xA;a relative path, such as warehouse/data1 an absolute path, such as /user/hive/warehouse/data1 a full URL with schema and (optionally) an authority, such as hdfs://namenode:9000/user/hive/warehouse/data1 The filepath can refer to a file (in which case, only the single file is loaded) or it can be a directory (in which case, all the files from the directory are loaded).</description>
    </item>
    <item>
      <title>Logging</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/advanced/logging/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/advanced/logging/</guid>
      <description>How to use logging # All Flink processes create a log text file that contains messages for various events happening in that process. These logs provide deep insights into the inner workings of Flink, and can be used to detect problems (in the form of WARN/ERROR messages) and can help in debugging them.&#xA;The log files can be accessed via the Job-/TaskManager pages of the WebUI. The used Resource Provider (e.</description>
    </item>
    <item>
      <title>Parquet</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/formats/parquet/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/formats/parquet/</guid>
      <description>Parquet format # Flink supports reading Parquet files, producing Flink RowData and producing Avro records. To use the format you need to add the flink-parquet dependency to your project:&#xA;&amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.apache.flink&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;flink-parquet&amp;lt;/artifactId&amp;gt; &amp;lt;version&amp;gt;2.2.0&amp;lt;/version&amp;gt; &amp;lt;/dependency&amp;gt; To read Avro records, you will need to add the parquet-avro dependency:&#xA;&amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.apache.parquet&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;parquet-avro&amp;lt;/artifactId&amp;gt; &amp;lt;version&amp;gt;1.12.2&amp;lt;/version&amp;gt; &amp;lt;optional&amp;gt;true&amp;lt;/optional&amp;gt; &amp;lt;exclusions&amp;gt; &amp;lt;exclusion&amp;gt; &amp;lt;groupId&amp;gt;org.apache.hadoop&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;hadoop-client&amp;lt;/artifactId&amp;gt; &amp;lt;/exclusion&amp;gt; &amp;lt;exclusion&amp;gt; &amp;lt;groupId&amp;gt;it.unimi.dsi&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;fastutil&amp;lt;/artifactId&amp;gt; &amp;lt;/exclusion&amp;gt; &amp;lt;/exclusions&amp;gt; &amp;lt;/dependency&amp;gt; In order to use the Parquet format in PyFlink jobs, the following dependencies are required: PyFlink JAR Download See Python dependency management for more details on how to use JARs in PyFlink.</description>
    </item>
    <item>
      <title>Process Function</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/operators/process_function/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/operators/process_function/</guid>
      <description>Process Function # The ProcessFunction # The ProcessFunction is a low-level stream processing operation, giving access to the basic building blocks of all (acyclic) streaming applications:&#xA;events (stream elements) state (fault-tolerant, consistent, only on keyed stream) timers (event time and processing time, only on keyed stream) The ProcessFunction can be thought of as a FlatMapFunction with access to keyed state and timers. It handles events by being invoked for each event received in the input stream(s).</description>
    </item>
    <item>
      <title>Process Function</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/datastream/operators/process_function/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/datastream/operators/process_function/</guid>
      <description>Process Function # ProcessFunction # The ProcessFunction is a low-level stream processing operation, giving access to the basic building blocks of all (acyclic) streaming applications:&#xA;events (stream elements) state (fault-tolerant, consistent, only on keyed stream) timers (event time and processing time, only on keyed stream) The ProcessFunction can be thought of as a FlatMapFunction with access to keyed state and timers. It handles events by being invoked for each event received in the input stream(s).</description>
    </item>
    <item>
      <title>Protobuf</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/protobuf/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/protobuf/</guid>
      <description>Protobuf Format # Format: Serialization Schema Format: Deserialization Schema&#xA;The Protocol Buffers (Protobuf) format allows you to read and write Protobuf data, based on Protobuf generated classes.&#xA;Dependencies # In order to use the Protobuf format, the following dependencies are required for both projects using a build automation tool (such as Maven or SBT) and SQL Client with SQL JAR bundles.&#xA;Maven dependency SQL Client &amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.apache.flink&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;flink-protobuf&amp;lt;/artifactId&amp;gt; &amp;lt;version&amp;gt;2.2.0&amp;lt;/version&amp;gt; &amp;lt;/dependency&amp;gt;</description>
    </item>
    <item>
      <title>Quickstart</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/materialized-table/quickstart/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/materialized-table/quickstart/</guid>
      <description>Quickstart Guide # This guide will help you quickly understand and get started with materialized tables. It includes setting up the environment and creating, altering, and dropping materialized tables in CONTINUOUS and FULL mode.&#xA;Environment Setup # Directory Preparation # Replace the example paths below with real paths on your machine.&#xA;Create directories for Catalog Store and test-filesystem Catalog: # Directory for File Catalog Store to save catalog information mkdir -p {catalog_store_path} # Directory for test-filesystem Catalog to save table metadata and table data mkdir -p {catalog_path} # Directory for the default database of test-filesystem Catalog mkdir -p {catalog_path}/mydb Create directories for Checkpoints and Savepoints: mkdir -p {checkpoints_path} mkdir -p {savepoints_path} Resource Preparation # The method here is similar to the steps recorded in local installation.</description>
    </item>
    <item>
      <title>Real Time Reporting with the Table API</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/try-flink/table_api/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/try-flink/table_api/</guid>
      <description>Real Time Reporting with the Table API # Apache Flink offers a Table API as a unified, relational API for batch and stream processing, i.e., queries are executed with the same semantics on unbounded, real-time streams or bounded, batch data sets and produce the same results. The Table API in Flink is commonly used to ease the definition of data analytics, data pipelining, and ETL applications.&#xA;What Will You Be Building?</description>
    </item>
    <item>
      <title>Recovery job progress from job master failures</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/batch/recovery_from_job_master_failure/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/batch/recovery_from_job_master_failure/</guid>
      <description>Batch job progress recovery from job master failures # Background # Previously, if the JobMaster failed and was terminated, one of the following two situations would occur:&#xA;If high availability (HA) is disabled, the job fails. If HA is enabled, a JobMaster failover happens and the job is restarted. Streaming jobs can resume from the latest successful checkpoints. Batch jobs, however, do not have checkpoints and have to start over from the beginning, losing all previously made progress.</description>
    </item>
    <item>
      <title>SELECT &amp; WHERE</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/select/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/select/</guid>
      <description>SELECT &amp;amp; WHERE clause # Batch Streaming&#xA;The general syntax of the SELECT statement is:&#xA;SELECT select_list FROM table_expression [ WHERE boolean_expression ] The table_expression refers to any source of data. It could be an existing table, view, VALUES, or VALUE clause, the joined results of multiple existing tables, or a subquery. Assuming that the table is available in the catalog, the following would read all rows from Orders.</description>
    </item>
    <item>
      <title>Set up JobManager Memory</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/memory/mem_setup_jobmanager/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/memory/mem_setup_jobmanager/</guid>
      <description>Set up JobManager Memory # The JobManager is the controlling element of the Flink Cluster. It consists of three distinct components: Resource Manager, Dispatcher and one JobMaster per running Flink Job. This guide walks you through high-level and fine-grained memory configurations for the JobManager.&#xA;The memory configuration described here is applicable starting with release version 1.11. If you upgrade Flink from earlier versions, check the migration guide because many changes were introduced with the 1.</description>
    </item>
    <item>
      <title>Text files</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/formats/text_files/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/formats/text_files/</guid>
      <description>Text files format # Flink supports reading text lines from a file using TextLineInputFormat. This format uses Java&amp;rsquo;s built-in InputStreamReader to decode the byte stream using various supported charset encodings. To use the format you need to add the Flink Connector Files dependency to your project:&#xA;&amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.apache.flink&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;flink-connector-files&amp;lt;/artifactId&amp;gt; &amp;lt;version&amp;gt;2.2.0&amp;lt;/version&amp;gt; &amp;lt;/dependency&amp;gt; PyFlink users can use it directly in their jobs.&#xA;This format is compatible with the new Source that can be used in both batch and streaming modes.</description>
    </item>
    <item>
      <title>Upsert Kafka</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/upsert-kafka/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/upsert-kafka/</guid>
      <description>Upsert Kafka SQL Connector # Scan Source: Unbounded Sink: Streaming Upsert Mode&#xA;The Upsert Kafka connector allows for reading data from and writing data into Kafka topics in the upsert fashion.&#xA;As a source, the upsert-kafka connector produces a changelog stream, where each data record represents an update or delete event. More precisely, the value in a data record is interpreted as an UPDATE of the last value for the same key, if any (if a corresponding key doesn’t exist yet, the update will be considered an INSERT).</description>
    </item>
    <item>
      <title>Versioned Tables</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/concepts/versioned_tables/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/concepts/versioned_tables/</guid>
      <description>Versioned Tables # Flink SQL operates over dynamic tables that evolve, which may either be append-only or updating. Versioned tables represent a special type of updating table that remembers the past values for each key.&#xA;Concept # Dynamic tables define relations over time. Often, particularly when working with metadata, a key&amp;rsquo;s old value does not become irrelevant when it changes.&#xA;Flink SQL can define versioned tables over any dynamic table with a PRIMARY KEY constraint and time attribute.</description>
    </item>
    <item>
      <title>Adaptive Batch</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/adaptive_batch/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/adaptive_batch/</guid>
      <description>Adaptive Batch Execution # This document describes the background, usage, and limitations of adaptive batch execution.&#xA;Background # In the traditional Flink batch job execution process, the execution plan of a job is determined before submission. To optimize the execution plan, users and Flink&amp;rsquo;s static execution plan optimizer need to understand the job logic and accurately evaluate how the job will execute, including the data characteristics processed by each node and the data distribution of the connecting edges.</description>
    </item>
    <item>
      <title>Async I/O</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/operators/asyncio/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/operators/asyncio/</guid>
      <description>Asynchronous I/O for External Data Access # This page explains the use of Flink&amp;rsquo;s API for asynchronous I/O with external data stores. For users not familiar with asynchronous or event-driven programming, an article about Futures and event-driven programming may be useful preparation.&#xA;Note: Details about the design and implementation of the asynchronous I/O utility can be found in the proposal and design document FLIP-12: Asynchronous I/O Design and Implementation. Details about the new retry support can be found in document FLIP-232: Add Retry Support For Async I/O In DataStream API.</description>
    </item>
    <item>
      <title>Balanced Tasks Scheduling</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/tasks-scheduling/balanced_tasks_scheduling/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/tasks-scheduling/balanced_tasks_scheduling/</guid>
      <description>Balanced Tasks Scheduling # This page describes the background and principle of balanced tasks scheduling, and how to use it when running streaming jobs.&#xA;Background # When the parallelism of the vertices within a Flink streaming job is inconsistent, Flink&amp;rsquo;s default task deployment strategy sometimes leads to some TaskManagers having more tasks while others have fewer, resulting in excessive resource utilization at the TaskManagers that contain more tasks, which then become a bottleneck for the entire job&amp;rsquo;s processing.</description>
    </item>
    <item>
      <title>Command-Line Interface</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/cli/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/cli/</guid>
      <description>Command-Line Interface # Flink provides a Command-Line Interface (CLI) bin/flink to run programs that are packaged as JAR files and to control their execution. The CLI is part of any Flink setup, available in local single-node setups and in distributed setups. It connects to the running JobManager specified in the Flink configuration file.&#xA;Job Lifecycle Management # A prerequisite for the commands listed in this section to work is to have a running Flink deployment like Kubernetes, YARN or any other option available.</description>
    </item>
    <item>
      <title>Connectors and Formats</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/configuration/connector/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/configuration/connector/</guid>
      <description>Connectors and Formats # Flink applications can read from and write to various external systems via connectors. Flink supports multiple formats for encoding and decoding data to match its data structures.&#xA;An overview of available connectors and formats is available for both DataStream and Table API/SQL.&#xA;Available artifacts # In order to use connectors and formats, you need to make sure Flink has access to the artifacts implementing them.</description>
    </item>
    <item>
      <title>Debezium</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/debezium/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/debezium/</guid>
      <description>Debezium Format # Changelog-Data-Capture Format Format: Serialization Schema Format: Deserialization Schema&#xA;Debezium is a CDC (Change Data Capture) tool that can stream changes in real time from MySQL, PostgreSQL, Oracle, Microsoft SQL Server and many other databases into Kafka. Debezium provides a unified format schema for changelogs and supports serializing messages using JSON and Apache Avro.&#xA;Flink supports interpreting Debezium JSON and Avro messages as INSERT/UPDATE/DELETE messages into the Flink SQL system.</description>
    </item>
    <item>
      <title>DROP Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/drop/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/drop/</guid>
      <description>DROP Statements # DROP statements are used to remove a catalog with the given catalog name or to remove a registered table/view/function from the current or specified Catalog.&#xA;Flink SQL supports the following DROP statements for now:&#xA;DROP CATALOG DROP TABLE DROP DATABASE DROP VIEW DROP FUNCTION DROP MODEL Run a DROP statement # Java DROP statements can be executed with the executeSql() method of the TableEnvironment. The executeSql() method returns &amp;lsquo;OK&amp;rsquo; for a successful DROP operation; otherwise it will throw an exception.</description>
    </item>
    <item>
      <title>DynamoDB</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/dynamodb/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/dynamodb/</guid>
      <description>Amazon DynamoDB Connector # The DynamoDB connector allows users to read from and write to Amazon DynamoDB.&#xA;As a source, the connector allows users to read a change data capture stream from DynamoDB tables using Amazon DynamoDB Streams.&#xA;As a sink, the connector allows users to write directly to Amazon DynamoDB tables using the BatchWriteItem API.&#xA;Dependency # Apache Flink ships the connector for users to utilize.&#xA;To use the connector, add the following Maven dependency to your project:</description>
    </item>
    <item>
      <title>DynamoDB</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/dynamodb/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/dynamodb/</guid>
      <description>Amazon DynamoDB SQL Connector # Sink: Batch Sink: Streaming Append &amp;amp; Upsert Mode&#xA;The DynamoDB connector allows for writing data into Amazon DynamoDB.&#xA;Dependencies # There is no connector (yet) available for Flink version 2.2.&#xA;How to create a DynamoDB table # Follow the instructions from the Amazon DynamoDB Developer Guide to set up a DynamoDB table. The following example shows how to create a table backed by a DynamoDB table with minimum required options:</description>
    </item>
    <item>
      <title>Elastic Scaling</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/elastic_scaling/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/elastic_scaling/</guid>
      <description>Elastic Scaling # Historically, the parallelism of a job has been static throughout its lifecycle and defined once during its submission. Batch jobs couldn&amp;rsquo;t be rescaled at all, while streaming jobs could have been stopped with a savepoint and restarted with a different parallelism.&#xA;This page describes a new class of schedulers that allow Flink to adjust a job&amp;rsquo;s parallelism at runtime, which pushes Flink one step closer to a truly cloud-native stream processor.</description>
    </item>
    <item>
      <title>Elasticsearch</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/elasticsearch/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/elasticsearch/</guid>
      <description>Elasticsearch Connector # This connector provides sinks that can request document actions to an Elasticsearch Index. To use this connector, add one of the following dependencies to your project, depending on the version of the Elasticsearch installation:&#xA;Elasticsearch version Maven Dependency 6.x There is no connector (yet) available for Flink version 2.2.&#xA;7.x There is no connector (yet) available for Flink version 2.2.&#xA;In order to use the connector in PyFlink jobs, the following dependencies are required: Version PyFlink JAR flink-connector-elasticsearch6 There is no SQL jar (yet) available for Flink version 2.</description>
    </item>
    <item>
      <title>Fine-Grained Resource Management</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/finegrained_resource/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/finegrained_resource/</guid>
      <description>Fine-Grained Resource Management # Apache Flink works hard to auto-derive sensible default resource requirements for all applications out of the box. For users who wish to fine-tune their resource consumption, based on knowledge of their specific scenarios, Flink offers fine-grained resource management.&#xA;This page describes the fine-grained resource management&amp;rsquo;s usage, applicable scenarios, and how it works.&#xA;Note: This feature is currently an MVP (&amp;ldquo;minimum viable product&amp;rdquo;) feature and only available for the DataStream API.</description>
    </item>
    <item>
      <title>Firehose</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/firehose/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/firehose/</guid>
      <description>Amazon Kinesis Data Firehose Sink # The Firehose sink writes to Amazon Kinesis Data Firehose.&#xA;Follow the instructions from the Amazon Kinesis Data Firehose Developer Guide to set up a Kinesis Data Firehose delivery stream.&#xA;To use the connector, add the following Maven dependency to your project:&#xA;There is no connector (yet) available for Flink version 2.2.&#xA;In order to use the connector in PyFlink jobs, the following dependencies are required: Version PyFlink JAR flink-connector-aws-kinesis-firehose There is no SQL jar (yet) available for Flink version 2.</description>
    </item>
    <item>
      <title>Firehose</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/firehose/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/firehose/</guid>
      <description>Amazon Kinesis Data Firehose SQL Connector # Sink: Batch Sink: Streaming Append Mode&#xA;The Kinesis Data Firehose connector allows for writing data into Amazon Kinesis Data Firehose (KDF).&#xA;Dependencies # There is no connector (yet) available for Flink version 2.2.&#xA;How to create a Kinesis Data Firehose table # Follow the instructions from the Amazon Kinesis Data Firehose Developer Guide to set up a Kinesis Data Firehose delivery stream. The following example shows how to create a table backed by a Kinesis Data Firehose delivery stream with minimum required options:</description>
    </item>
    <item>
      <title>Full Window Partition</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/operators/full_window_partition/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/operators/full_window_partition/</guid>
      <description>Full Window Partition Processing on DataStream # This page explains the use of the full window partition processing API on DataStream. Flink now enables both keyed and non-keyed DataStreams to be transformed directly into a PartitionWindowedStream. The PartitionWindowedStream represents collecting all records of each subtask separately into a full window. The PartitionWindowedStream supports four APIs: mapPartition, sortPartition, aggregate and reduce.&#xA;Note: Details about the design and implementation of the full window partition processing can be found in the proposal and design document FLIP-380: Support Full Partition Processing On Non-keyed DataStream.</description>
    </item>
    <item>
      <title>General User-defined Functions</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/udfs/python_udfs/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/udfs/python_udfs/</guid>
      <description>General User-defined Functions # User-defined functions are important features, because they significantly extend the expressiveness of Python Table API programs.&#xA;NOTE: Python UDF execution requires Python version (3.9, 3.10, 3.11 or 3.12) with PyFlink installed. It&amp;rsquo;s required on both the client side and the cluster side.&#xA;Scalar Functions # Python scalar functions can be used in Python Table API programs. In order to define a Python scalar function, one can extend the base class ScalarFunction in pyflink.</description>
    </item>
    <item>
      <title>Glossary</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/concepts/glossary/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/concepts/glossary/</guid>
      <description>Glossary # Checkpoint Storage # The location where the State Backend will store its snapshot during a checkpoint (Java Heap of JobManager or Filesystem).&#xA;Flink Application Cluster # A Flink Application Cluster is a dedicated Flink Cluster that only executes Flink Jobs from one Flink Application. The lifetime of the Flink Cluster is bound to the lifetime of the Flink Application.&#xA;Flink Job Cluster # A Flink Job Cluster is a dedicated Flink Cluster that only executes a single Flink Job.</description>
    </item>
    <item>
      <title>Hive Functions</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/hive/hive_functions/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/hive/hive_functions/</guid>
      <description>Hive Functions # Use Hive Built-in Functions via HiveModule # The HiveModule provides Hive built-in functions as Flink system (built-in) functions to Flink SQL and Table API users.&#xA;For detailed information, please refer to HiveModule.&#xA;Java String name = &amp;#34;myhive&amp;#34;; String version = &amp;#34;2.3.4&amp;#34;; tableEnv.loadModule(name, new HiveModule(version)); Scala val name = &amp;#34;myhive&amp;#34; val version = &amp;#34;2.3.4&amp;#34; tableEnv.loadModule(name, new HiveModule(version)); Python from pyflink.table.module import HiveModule name = &amp;#34;myhive&amp;#34; version = &amp;#34;2.</description>
    </item>
    <item>
      <title>Jobs and Scheduling</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/internals/job_scheduling/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/internals/job_scheduling/</guid>
      <description>Jobs and Scheduling # This document briefly describes how Flink schedules jobs and how it represents and tracks job status on the JobManager.&#xA;Scheduling # Execution resources in Flink are defined through Task Slots. Each TaskManager will have one or more task slots, each of which can run one pipeline of parallel tasks. A pipeline consists of multiple successive tasks, such as the n-th parallel instance of a MapFunction together with the n-th parallel instance of a ReduceFunction.</description>
    </item>
    <item>
      <title>Kinesis</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/kinesis/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/kinesis/</guid>
      <description>Amazon Kinesis Data Streams Connector # The Kinesis connector allows users to read from and write to Amazon Kinesis Data Streams.&#xA;Dependency # To use this connector, add the following dependency to your project:&#xA;There is no connector (yet) available for Flink version 2.2.&#xA;In order to use the connector in PyFlink jobs, the following dependencies are required: Version PyFlink JAR flink-connector-aws-kinesis-streams There is no SQL jar (yet) available for Flink version 2.</description>
    </item>
    <item>
      <title>Kinesis</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/kinesis/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/kinesis/</guid>
      <description>Amazon Kinesis Data Streams SQL Connector # Scan Source: Unbounded Sink: Batch Sink: Streaming Append Mode&#xA;The Kinesis connector allows for reading data from and writing data into Amazon Kinesis Data Streams (KDS).&#xA;Dependencies # There is no connector (yet) available for Flink version 2.2.&#xA;The Kinesis connector is not part of the binary distribution. See how to link with it for cluster execution here.&#xA;Versioning # There are two available Table API and SQL distributions for the Kinesis connector.</description>
    </item>
    <item>
      <title>Kubernetes</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/resource-providers/standalone/kubernetes/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/resource-providers/standalone/kubernetes/</guid>
      <description>Kubernetes Setup # Getting Started # This Getting Started guide describes how to deploy a Session cluster on Kubernetes.&#xA;Introduction # This page describes deploying a standalone Flink cluster on top of Kubernetes, using Flink&amp;rsquo;s standalone deployment. We generally recommend that new users deploy Flink on Kubernetes using native Kubernetes deployments.&#xA;Apache Flink also provides a Kubernetes operator for managing Flink clusters on Kubernetes. It supports both standalone and native deployment modes and greatly simplifies deployment, configuration and the lifecycle management of Flink resources on Kubernetes.</description>
    </item>
    <item>
      <title>Memory Tuning Guide</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/memory/mem_tuning/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/memory/mem_tuning/</guid>
      <description>Memory tuning guide # In addition to the main memory setup guide, this section explains how to set up memory depending on the use case and which options are important for each case.&#xA;Configure memory for standalone deployment # It is recommended to configure total Flink memory (taskmanager.memory.flink.size or jobmanager.memory.flink.size) or its components for standalone deployment where you want to declare how much memory is given to Flink itself. Additionally, you can adjust JVM metaspace if it causes problems.</description>
    </item>
    <item>
      <title>Metrics</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/metrics/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/metrics/</guid>
      <description>Metrics # Flink exposes a metric system that allows gathering and exposing metrics to external systems.&#xA;Registering metrics # You can access the metric system from any user function that extends RichFunction by calling getRuntimeContext().getMetricGroup(). This method returns a MetricGroup object on which you can create and register new metrics.&#xA;Metric types # Flink supports Counters, Gauges, Histograms and Meters.&#xA;Counter # A Counter is used to count something. The current value can be in- or decremented using inc()/inc(long n) or dec()/dec(long n).</description>
    </item>
    <item>
      <title>MongoDB</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/mongodb/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/mongodb/</guid>
      <description>MongoDB Connector # Flink provides a MongoDB connector for reading and writing data from and to MongoDB collections with at-least-once guarantees.&#xA;To use this connector, add one of the following dependencies to your project.&#xA;There is no connector (yet) available for Flink version 2.2.&#xA;MongoDB Source # The example below shows how to configure and create a source:&#xA;Java import org.apache.flink.api.common.eventtime.WatermarkStrategy; import org.apache.flink.api.common.typeinfo.BasicTypeInfo; import org.apache.flink.api.common.typeinfo.TypeInformation; import org.apache.flink.connector.mongodb.source.MongoSource; import org.apache.flink.connector.mongodb.source.reader.deserializer.MongoDeserializationSchema; import org.</description>
    </item>
    <item>
      <title>MongoDB</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/mongodb/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/mongodb/</guid>
      <description>MongoDB SQL Connector # Scan Source: Bounded Lookup Source: Sync Mode Sink: Batch Sink: Streaming Append &amp;amp; Upsert Mode&#xA;The MongoDB connector allows for reading data from and writing data into MongoDB. This document describes how to set up the MongoDB connector to run SQL queries against MongoDB.&#xA;The connector can operate in upsert mode for exchanging UPDATE/DELETE messages with the external system using the primary key defined on the DDL.</description>
    </item>
    <item>
      <title>Opensearch</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/opensearch/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/opensearch/</guid>
      <description>Opensearch Connector # This connector provides sinks that can request document actions to an Opensearch Index. To use this connector, add the following dependency to your project:&#xA;Opensearch version Maven Dependency 1.x There is no connector (yet) available for Flink version 2.2.&#xA;2.x There is no connector (yet) available for Flink version 2.2.&#xA;By default, the Apache Flink Opensearch Connector uses 1.3.x client libraries. You can switch to 2.x (or upcoming 3.</description>
    </item>
    <item>
      <title>Plugins</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/filesystems/plugins/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/filesystems/plugins/</guid>
      <description>Plugins # Plugins facilitate a strict separation of code through restricted classloaders. Plugins cannot access classes from other plugins or from Flink that have not been specifically whitelisted. This strict isolation allows plugins to contain conflicting versions of the same library without the need to relocate classes or to converge to common versions. Currently, file systems and metric reporters are pluggable but in the future, connectors, formats, and even user code should also be pluggable.</description>
    </item>
    <item>
      <title>Prometheus</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/prometheus/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/prometheus/</guid>
      <description>Prometheus Sink # This sink connector can be used to write data to Prometheus-compatible storage, using the Remote Write Prometheus interface.&#xA;The Prometheus-compatible backend must support Remote Write 1.0 standard API, and the Remote Write endpoint must be enabled.&#xA;This connector is not meant for sending internal Flink metrics to Prometheus. To publish Flink metrics, for monitoring health and operations of the Flink cluster, you should use Metric Reporters. To use the connector, add the following Maven dependency to your project:</description>
    </item>
    <item>
      <title>SELECT DISTINCT</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/select-distinct/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/select-distinct/</guid>
      <description>SELECT DISTINCT # Batch Streaming&#xA;If SELECT DISTINCT is specified, all duplicate rows are removed from the result set (one row is kept from each group of duplicates).&#xA;SELECT DISTINCT id FROM Orders For streaming queries, the required state for computing the query result might grow infinitely. State size depends on the number of distinct rows. You can provide a query configuration with an appropriate state time-to-live (TTL) to prevent excessive state size.</description>
    </item>
    <item>
      <title>Set Operations</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/set-op/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/set-op/</guid>
      <description>Set Operations # Set Operations are used to combine multiple SELECT statements into a single result set. Hive dialect supports the following operations:&#xA;UNION INTERSECT EXCEPT/MINUS UNION # Description # UNION/UNION DISTINCT/UNION ALL returns the rows that are found on either side.&#xA;UNION and UNION DISTINCT return only the distinct rows, while UNION ALL does not remove duplicates.&#xA;Syntax # &amp;lt;query&amp;gt; { UNION [ ALL | DISTINCT ] } &amp;lt;query&amp;gt; [ .</description>
    </item>
    <item>
      <title>SHOW Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/show/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/show/</guid>
      <description>SHOW Statements # With Hive dialect, the following SHOW statements are supported for now:&#xA;SHOW DATABASES SHOW TABLES SHOW VIEWS SHOW PARTITIONS SHOW FUNCTIONS SHOW DATABASES # Description # The SHOW DATABASES statement is used to list all the databases defined in the metastore.&#xA;Syntax # SHOW (DATABASES|SCHEMAS); The use of SCHEMA and DATABASE is interchangeable - they mean the same thing.&#xA;SHOW TABLES # Description # The SHOW TABLES statement lists all the base tables and views in the current database.</description>
    </item>
    <item>
      <title>Speculative Execution</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/speculative_execution/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/speculative_execution/</guid>
      <description>Speculative Execution # This page describes the background of speculative execution, how to use it, and how to check its effectiveness.&#xA;Background # Speculative execution is a mechanism to mitigate job slowness caused by problematic nodes. A problematic node may have hardware problems, unexpectedly busy I/O, or high CPU load. These problems may make the hosted tasks run much slower than tasks on other nodes, and affect the overall execution time of a batch job.</description>
    </item>
    <item>
      <title>SQS</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/sqs/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/sqs/</guid>
      <description>Amazon SQS Sink # The SQS sink writes to Amazon SQS using the AWS v2 SDK for Java. Follow the instructions from the Amazon SQS Developer Guide to set up an SQS message queue.&#xA;To use the connector, add the following Maven dependency to your project:&#xA;There is no connector (yet) available for Flink version 2.2.&#xA;Java Properties sinkProperties = new Properties(); // Required sinkProperties.put(AWSConfigConstants.AWS_REGION, &amp;#34;eu-west-1&amp;#34;); // Optional, provide via alternative routes e.</description>
    </item>
    <item>
      <title>Streaming Analytics</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/learn-flink/streaming_analytics/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/learn-flink/streaming_analytics/</guid>
      <description>Streaming Analytics # Event Time and Watermarks # Introduction # Flink explicitly supports three different notions of time:&#xA;event time: the time when an event occurred, as recorded by the device producing (or storing) the event&#xA;ingestion time: a timestamp recorded by Flink at the moment it ingests the event&#xA;processing time: the time when a specific operator in your pipeline is processing the event&#xA;For reproducible results, e.g., when computing the maximum price a stock reached during the first hour of trading on a given day, you should use event time.</description>
    </item>
    <item>
      <title>User-Defined Functions</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/user_defined_functions/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/user_defined_functions/</guid>
      <description>User-Defined Functions # Most operations require a user-defined function. This section lists different ways of how they can be specified. We also cover Accumulators, which can be used to gain insights into your Flink application.&#xA;Implementing an interface # The most basic way is to implement one of the provided interfaces:&#xA;class MyMapFunction implements MapFunction&amp;lt;String, Integer&amp;gt; { public Integer map(String value) { return Integer.parseInt(value); } } data.map(new MyMapFunction()); Anonymous classes # You can pass a function as an anonymous class:</description>
    </item>
    <item>
      <title>YARN</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/resource-providers/yarn/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/resource-providers/yarn/</guid>
      <description>Apache Hadoop YARN # Getting Started # This Getting Started section guides you through setting up a fully functional Flink Cluster on YARN.&#xA;Introduction # Apache Hadoop YARN is a resource provider popular with many data processing frameworks. Flink services are submitted to YARN&amp;rsquo;s ResourceManager, which spawns containers on machines managed by YARN NodeManagers. Flink deploys its JobManager and TaskManager instances into such containers.&#xA;Flink can dynamically allocate and de-allocate TaskManager resources depending on the number of processing slots required by the job(s) running on the JobManager.</description>
    </item>
    <item>
      <title>ALTER Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/alter/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/alter/</guid>
      <description>ALTER Statements # ALTER statements are used to modify the definition of a table, view or function that has already been registered in the Catalog, or the definition of a catalog itself.&#xA;Flink SQL supports the following ALTER statements for now:&#xA;ALTER TABLE ALTER VIEW ALTER DATABASE ALTER FUNCTION ALTER CATALOG ALTER MODEL Run an ALTER statement # Java ALTER statements can be executed with the executeSql() method of the TableEnvironment.</description>
    </item>
    <item>
      <title>Canal</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/canal/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/canal/</guid>
      <description>Canal Format # Changelog-Data-Capture Format Format: Serialization Schema Format: Deserialization Schema&#xA;Canal is a CDC (Changelog Data Capture) tool that can stream changes in real time from MySQL into other systems. Canal provides a unified format schema for changelogs and supports serializing messages using JSON and protobuf (protobuf is the default format for Canal).&#xA;Flink supports interpreting Canal JSON messages as INSERT/UPDATE/DELETE messages in the Flink SQL system. This feature is useful in many cases, such as</description>
    </item>
    <item>
      <title>Event-driven Applications</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/learn-flink/event_driven/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/learn-flink/event_driven/</guid>
      <description>Event-driven Applications # Process Functions # Introduction # A ProcessFunction combines event processing with timers and state, making it a powerful building block for stream processing applications. This is the basis for creating event-driven applications with Flink. It is very similar to a RichFlatMapFunction, but with the addition of timers.&#xA;Example # If you&amp;rsquo;ve done the hands-on exercise in the Streaming Analytics training, you will recall that it uses a TumblingEventTimeWindow to compute the sum of the tips for each driver during each hour, like this:</description>
    </item>
    <item>
      <title>FileSystem</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/filesystem/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/filesystem/</guid>
      <description>FileSystem # This connector provides a unified Source and Sink for BATCH and STREAMING that reads or writes (partitioned) files to file systems supported by the Flink FileSystem abstraction. This filesystem connector provides the same guarantees for both BATCH and STREAMING and is designed to provide exactly-once semantics for STREAMING execution.&#xA;The connector supports reading and writing a set of files from any (distributed) file system (e.g. POSIX, S3, HDFS) with a format (e.</description>
    </item>
    <item>
      <title>Flink Operations Playground</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/try-flink/flink-operations-playground/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/try-flink/flink-operations-playground/</guid>
      <description>Flink Operations Playground # There are many ways to deploy and operate Apache Flink in various environments. Regardless of this variety, the fundamental building blocks of a Flink Cluster remain the same, and similar operational principles apply.&#xA;In this playground, you will learn how to manage and run Flink Jobs. You will see how to deploy and monitor an application, experience how Flink recovers from Job failure, and perform everyday operational tasks like upgrades and rescaling.</description>
    </item>
    <item>
      <title>JDBC</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/jdbc/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/jdbc/</guid>
      <description>JDBC SQL Connector # Scan Source: Bounded Lookup Source: Sync Mode Sink: Batch Sink: Streaming Append &amp;amp; Upsert Mode&#xA;The JDBC connector allows for reading data from and writing data into any relational database with a JDBC driver. This document describes how to set up the JDBC connector to run SQL queries against relational databases.&#xA;The JDBC sink operates in upsert mode, exchanging UPDATE/DELETE messages with the external system, if a primary key is defined in the DDL; otherwise, it operates in append mode and does not support consuming UPDATE/DELETE messages.</description>
    </item>
    <item>
      <title>Lateral View Clause</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/lateral-view/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/lateral-view/</guid>
      <description>Lateral View Clause # Description # The lateral view clause is used in conjunction with user-defined table generating functions (UDTFs) such as explode(). A UDTF generates zero or more output rows for each input row.&#xA;A lateral view first applies the UDTF to each row of the base table and then joins the resulting output rows to the input rows to form a virtual table with the supplied table alias.&#xA;Syntax # lateralView: LATERAL VIEW [ OUTER ] udtf( expression ) tableAlias AS columnAlias [, .</description>
    </item>
    <item>
      <title>State Backends</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/state_backends/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/fault-tolerance/state_backends/</guid>
      <description>State Backends # Flink provides different state backends that specify how and where state is stored.&#xA;State can be located on Java’s heap or off-heap. Depending on your state backend, Flink can also manage the state for the application, meaning Flink deals with the memory management (possibly spilling to disk if necessary) to allow applications to hold very large state. By default, the Flink configuration file determines the state backend for all Flink jobs.</description>
    </item>
    <item>
      <title>Task Lifecycle</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/internals/task_lifecycle/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/internals/task_lifecycle/</guid>
      <description>Task Lifecycle # A task in Flink is the basic unit of execution. It is the place where each parallel instance of an operator is executed. As an example, an operator with a parallelism of 5 will have each of its instances executed by a separate task.&#xA;The StreamTask is the base for all different task sub-types in Flink&amp;rsquo;s streaming engine. This document goes through the different phases in the lifecycle of the StreamTask and describes the main methods representing each of these phases.</description>
    </item>
    <item>
      <title>Test Dependencies</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/configuration/testing/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/configuration/testing/</guid>
      <description>Dependencies for Testing # Flink provides utilities for testing your job that you can add as dependencies.&#xA;DataStream API Testing # You need to add the following dependencies if you want to develop tests for a job built with the DataStream API:&#xA;Maven Open the pom.xml file in your project directory and add the following in the dependencies block. &amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.apache.flink&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;flink-test-utils&amp;lt;/artifactId&amp;gt; &amp;lt;version&amp;gt;2.2.0&amp;lt;/version&amp;gt; &amp;lt;scope&amp;gt;test&amp;lt;/scope&amp;gt; &amp;lt;/dependency&amp;gt; Check out Project configuration for more details.</description>
    </item>
    <item>
      <title>Traces</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/traces/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/traces/</guid>
      <description>Traces # Flink exposes a tracing system that allows gathering and exposing traces to external systems.&#xA;Reporting traces # You can access the tracing system from any user function that extends RichFunction by calling getRuntimeContext().getMetricGroup(). This method returns a MetricGroup object via which you can report a new single trace with a tree of spans.&#xA;Reporting a single Span # A Span represents a process that happened in Flink at a certain point in time for a certain duration and will be reported to a TraceReporter.</description>
    </item>
    <item>
      <title>Troubleshooting</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/memory/mem_trouble/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/memory/mem_trouble/</guid>
      <description>Troubleshooting # IllegalConfigurationException # If you see an IllegalConfigurationException thrown from TaskExecutorProcessUtils or JobManagerProcessUtils, it usually indicates that there is either an invalid configuration value (e.g. negative memory size, fraction that is greater than 1, etc.) or configuration conflicts. Check the documentation chapters or configuration options related to the memory components mentioned in the exception message.&#xA;OutOfMemoryError: Java heap space # The exception usually indicates that the JVM Heap is too small.</description>
    </item>
    <item>
      <title>Windowing TVF</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/window-tvf/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/window-tvf/</guid>
      <description>Windowing table-valued functions (Windowing TVFs) # Batch Streaming&#xA;Windows are at the heart of processing infinite streams. Windows split the stream into “buckets” of finite size, over which we can apply computations. This document focuses on how windowing is performed in Flink SQL and how programmers can get the most out of the functionality it offers.&#xA;Apache Flink provides several window table-valued functions (TVFs) to divide the elements of your table into windows, including:</description>
    </item>
    <item>
      <title>ADD Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/add/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/add/</guid>
      <description>ADD Statements # With the Hive dialect, the following ADD statements are supported for now:&#xA;ADD JAR ADD JAR # Description # The ADD JAR statement is used to add user JARs to the classpath. Adding multiple JAR files in a single ADD JAR statement is not supported.&#xA;Syntax # ADD JAR &amp;lt;jar_path&amp;gt;; Parameters # jar_path&#xA;The path of the JAR file to be added. It can be on either a local or a distributed file system.</description>
    </item>
    <item>
      <title>Elasticsearch</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/elasticsearch/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/elasticsearch/</guid>
      <description>Elasticsearch SQL Connector # Sink: Batch Sink: Streaming Append &amp;amp; Upsert Mode&#xA;The Elasticsearch connector allows for writing into an index of the Elasticsearch engine. This document describes how to set up the Elasticsearch Connector to run SQL queries against Elasticsearch.&#xA;The connector can operate in upsert mode, exchanging UPDATE/DELETE messages with the external system using the primary key defined in the DDL.&#xA;If no primary key is defined in the DDL, the connector can only operate in append mode, exchanging INSERT-only messages with the external system.</description>
    </item>
    <item>
      <title>Events</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/events/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/events/</guid>
      <description>Events # Flink exposes an event reporting system that allows gathering and exposing events to external systems.&#xA;Reporting events # You can access the event system from any user function that extends RichFunction by calling getRuntimeContext().getMetricGroup(). This method returns a MetricGroup object via which you can report a single new event.&#xA;Reporting a single Event # An Event represents something that happened in Flink at a certain point in time and will be reported to an EventReporter.</description>
    </item>
    <item>
      <title>Fault Tolerance</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/learn-flink/fault_tolerance/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/learn-flink/fault_tolerance/</guid>
      <description>Fault Tolerance via State Snapshots # State Backends # The keyed state managed by Flink is a sort of sharded, key/value store, and the working copy of each item of keyed state is kept somewhere local to the taskmanager responsible for that key. Operator state is also local to the machine(s) that need(s) it.&#xA;This state that Flink manages is stored in a state backend. Two implementations of state backends are available &amp;ndash; one based on RocksDB, an embedded key/value store that keeps its working state on disk, and another heap-based state backend that keeps its working state in memory, on the Java heap.</description>
    </item>
    <item>
      <title>INSERT Statement</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/insert/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/insert/</guid>
      <description>INSERT Statement # INSERT statements are used to add rows to a table.&#xA;Run an INSERT statement # Java A single INSERT statement can be executed through the executeSql() method of the TableEnvironment. The executeSql() method for an INSERT statement will submit a Flink job immediately and return a TableResult instance associated with the submitted job. Multiple INSERT statements can be executed through the addInsertSql() method of the StatementSet, which can be created by the TableEnvironment.</description>
    </item>
    <item>
      <title>Maxwell</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/maxwell/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/maxwell/</guid>
      <description>Maxwell Format # Changelog-Data-Capture Format Format: Serialization Schema Format: Deserialization Schema&#xA;Maxwell is a CDC (Changelog Data Capture) tool that can stream changes in real time from MySQL into Kafka, Kinesis and other streaming connectors. Maxwell provides a unified format schema for changelogs and supports serializing messages using JSON.&#xA;Flink supports interpreting Maxwell JSON messages as INSERT/UPDATE/DELETE messages in the Flink SQL system. This feature is useful in many cases, such as</description>
    </item>
    <item>
      <title>Metric Reporters</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/metric_reporters/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/metric_reporters/</guid>
      <description>Metric Reporters # Flink allows reporting metrics to external systems. For more information about Flink&amp;rsquo;s metric system, go to the metric system documentation.&#xA;Metrics can be exposed to an external system by configuring one or several reporters in the Flink configuration file. These reporters will be instantiated on each job and task manager when they are started.&#xA;Below is a list of parameters that are generally applicable to all reporters. All properties are configured by setting metrics.</description>
    </item>
    <item>
      <title>Migration Guide</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/memory/mem_migration/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/memory/mem_migration/</guid>
      <description>Migration Guide # The memory setup has changed a lot with the 1.10 release for TaskManagers and with the 1.11 release for JobManagers. Many configuration options were removed or their semantics changed. This guide will help you to migrate the TaskManager memory configuration from Flink &amp;lt;= 1.9 to &amp;gt;= 1.10 and the JobManager memory configuration from Flink &amp;lt;= 1.10 to &amp;gt;= 1.11.&#xA;It is important to review this guide because the legacy and new memory configuration can result in different sizes of memory components.</description>
    </item>
    <item>
      <title>Model Inference</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/model-inference/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/model-inference/</guid>
      <description>Model Inference # Streaming Flink SQL provides the ML_PREDICT table-valued function (TVF) to perform model inference in SQL queries. This function allows you to apply machine learning models to your data streams directly in SQL. See Model Creation about how to create a model.&#xA;ML_PREDICT Function # The ML_PREDICT function takes a table input, applies a model to it, and returns a new table with the model&amp;rsquo;s predictions. The function offers support for synchronous/asynchronous inference modes when the underlying model permits both.</description>
    </item>
    <item>
      <title>Opensearch</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/opensearch/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/opensearch/</guid>
      <description>Opensearch SQL Connector # Sink: Batch Sink: Streaming Append &amp;amp; Upsert Mode&#xA;The Opensearch connector allows for writing into an index of the Opensearch engine. This document describes how to set up the Opensearch Connector to run SQL queries against Opensearch.&#xA;The connector can operate in upsert mode, exchanging UPDATE/DELETE messages with the external system using the primary key defined in the DDL.&#xA;If no primary key is defined in the DDL, the connector can only operate in append mode, exchanging INSERT-only messages with the external system.</description>
    </item>
    <item>
      <title>Python REPL</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/repls/python_shell/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/repls/python_shell/</guid>
      <description>Python REPL # Flink comes with an integrated interactive Python Shell. It can be used in a local setup as well as in a cluster setup. See the standalone resource provider page for more information about how to set up a local Flink. You can also build a local setup from source.&#xA;Note The Python Shell will run the command “python”. Please refer to the Python Table API installation guide on how to set up the Python execution environments.</description>
    </item>
    <item>
      <title>RabbitMQ</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/rabbitmq/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/rabbitmq/</guid>
      <description>RabbitMQ Connector # License of the RabbitMQ Connector # Flink&amp;rsquo;s RabbitMQ connector defines a Maven dependency on the &amp;ldquo;RabbitMQ AMQP Java Client&amp;rdquo;, which is triple-licensed under the Mozilla Public License 1.1 (&amp;ldquo;MPL&amp;rdquo;), the GNU General Public License version 2 (&amp;ldquo;GPL&amp;rdquo;) and the Apache License version 2 (&amp;ldquo;ASL&amp;rdquo;).&#xA;Flink itself neither reuses source code from the &amp;ldquo;RabbitMQ AMQP Java Client&amp;rdquo; nor packages binaries from the &amp;ldquo;RabbitMQ AMQP Java Client&amp;rdquo;.&#xA;Users that create and publish derivative work based on Flink&amp;rsquo;s RabbitMQ connector (thereby re-distributing the &amp;ldquo;RabbitMQ AMQP Java Client&amp;rdquo;) must be aware that this may be subject to conditions declared in the Mozilla Public License 1.</description>
    </item>
    <item>
      <title>Trace Reporters</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/trace_reporters/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/trace_reporters/</guid>
      <description>Trace Reporters # Flink allows reporting traces to external systems. For more information about Flink&amp;rsquo;s tracing system, go to the tracing system documentation.&#xA;Traces can be exposed to an external system by configuring one or several reporters in the Flink configuration file. These reporters will be instantiated on each job and task manager when they are started.&#xA;Below is a list of parameters that are generally applicable to all reporters. All properties are configured by setting traces.</description>
    </item>
    <item>
      <title>Vector Search</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/vector-search/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/vector-search/</guid>
      <description>Vector Search # Batch Streaming&#xA;Flink SQL provides the VECTOR_SEARCH table-valued function (TVF) to perform a vector search in SQL queries. This function allows you to search for similar rows based on high-dimensional vectors.&#xA;VECTOR_SEARCH Function # The VECTOR_SEARCH function uses a processing-time attribute to correlate rows to the latest version of data in an external table. It is very similar to a lookup join in Flink SQL; the difference is that VECTOR_SEARCH uses the input data vector to compare similarity with data in the external table and returns the top-k most similar rows.</description>
    </item>
    <item>
      <title>Window Aggregation</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/window-agg/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/window-agg/</guid>
      <description>Window Aggregation # Window TVF Aggregation # Batch Streaming&#xA;Window aggregations are defined with a GROUP BY clause that contains the &amp;ldquo;window_start&amp;rdquo; and &amp;ldquo;window_end&amp;rdquo; columns of a relation to which a windowing TVF has been applied. Just like queries with regular GROUP BY clauses, queries with a window aggregation will compute a single result row per group.&#xA;SELECT ... FROM &amp;lt;windowed_table&amp;gt; -- relation with a windowing TVF applied GROUP BY window_start, window_end, ... Unlike other aggregations on continuous tables, window aggregations do not emit intermediate results but only a final result, the total aggregation at the end of the window.</description>
    </item>
    <item>
      <title>Window Functions</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/window-functions/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/window-functions/</guid>
      <description>Window Functions # Description # Window functions are a kind of aggregation over a group of rows, referred to as a window. They return the aggregation value for each row based on its group of rows.&#xA;Syntax # window_function OVER ( [ { PARTITION | DISTRIBUTE } BY colName ( [, ... ] ) ] { ORDER | SORT } BY expression [ ASC | DESC ] [ NULLS { FIRST | LAST } ] [ , .</description>
    </item>
    <item>
      <title>ANALYZE Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/analyze/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/analyze/</guid>
      <description>ANALYZE Statements # ANALYZE statements are used to collect statistics for existing tables and store the results in the catalog. Only ANALYZE TABLE statements are supported for now, and they need to be triggered manually instead of automatically.&#xA;Attention Currently, ANALYZE TABLE is only supported in batch mode. Only existing tables are supported; an exception will be thrown if the table is a view or does not exist.&#xA;Run an ANALYZE TABLE statement # Java ANALYZE TABLE statements can be executed with the executeSql() method of the TableEnvironment.</description>
    </item>
    <item>
      <title>Checkpoints</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/state/checkpoints/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/state/checkpoints/</guid>
      <description>Checkpoints # Overview # Checkpoints make state in Flink fault tolerant by allowing state and the corresponding stream positions to be recovered, thereby giving the application the same semantics as a failure-free execution.&#xA;See Checkpointing for how to enable and configure checkpoints for your program.&#xA;To understand the differences between checkpoints and savepoints see checkpoints vs. savepoints.&#xA;Checkpoint Storage # When checkpointing is enabled, managed state is persisted to ensure consistent recovery in case of failures.</description>
    </item>
    <item>
      <title>DESCRIBE Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/describe/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/describe/</guid>
      <description>DESCRIBE Statements # DESCRIBE statements are used to describe the schema of a table or a view, the metadata of a catalog or a function, or a specified job in the Flink cluster.&#xA;Run a DESCRIBE statement # Java DESCRIBE statements can be executed with the executeSql() method of the TableEnvironment. The executeSql() method returns objects for a successful DESCRIBE operation; otherwise it will throw an exception.&#xA;The following examples show how to run a DESCRIBE statement in TableEnvironment.</description>
    </item>
    <item>
      <title>Event Reporters</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/event_reporters/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/event_reporters/</guid>
      <description>Event Reporters # Flink allows reporting events (structured logging) to external systems. For more information about Flink&amp;rsquo;s event reporting system, go to the events system documentation.&#xA;Events can be exposed to an external system by configuring one or several reporters in the Flink configuration file. These reporters will be instantiated on each job and task manager when they are started.&#xA;Below is a list of parameters that are generally applicable to all reporters.</description>
    </item>
    <item>
      <title>FileSystem</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/filesystem/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/filesystem/</guid>
      <description>FileSystem SQL Connector # This connector provides access to partitioned files in filesystems supported by the Flink FileSystem abstraction.&#xA;The file system connector itself is included in Flink and does not require an additional dependency. The corresponding jar can be found in the Flink distribution inside the /lib directory. A corresponding format needs to be specified for reading and writing rows from and to a file system.&#xA;The file system connector allows for reading and writing from a local or distributed filesystem.</description>
    </item>
    <item>
      <title>Google Cloud PubSub</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/pubsub/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/pubsub/</guid>
      <description>Google Cloud PubSub # This connector provides a Source and Sink that can read from and write to Google Cloud PubSub. To use this connector, add the following dependency to your project:&#xA;There is no connector (yet) available for Flink version 2.2.&#xA;Note: This connector has been added to Flink recently. It has not received widespread testing yet. Note that the streaming connectors are currently not part of the binary distribution.</description>
    </item>
    <item>
      <title>Group Aggregation</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/group-agg/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/group-agg/</guid>
      <description>Group Aggregation # Batch Streaming&#xA;Like most data systems, Apache Flink supports aggregate functions, both built-in and user-defined. User-defined functions must be registered in a catalog before use.&#xA;An aggregate function computes a single result from multiple input rows. For example, there are aggregates to compute the COUNT, SUM, AVG (average), MAX (maximum) and MIN (minimum) over a set of rows.&#xA;SELECT COUNT(*) FROM Orders For streaming queries, it is important to understand that Flink runs continuous queries that never terminate.</description>
    </item>
    <item>
      <title>Hybrid Source</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/hybridsource/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/hybridsource/</guid>
      <description>Hybrid Source # HybridSource is a source that contains a list of concrete sources. It solves the problem of sequentially reading input from heterogeneous sources to produce a single input stream.&#xA;For example, a bootstrap use case may need to read several days&amp;rsquo; worth of bounded input from S3 before continuing with the latest unbounded input from Kafka. HybridSource switches from FileSource to KafkaSource when the bounded file input finishes, without interrupting the application.</description>
    </item>
    <item>
      <title>Ogg</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/ogg/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/ogg/</guid>
      <description>Ogg Format # Changelog-Data-Capture Format Format: Serialization Schema Format: Deserialization Schema&#xA;Oracle GoldenGate (a.k.a. ogg) is a managed service providing a real-time data mesh platform, which uses replication to keep data highly available and enables real-time analysis. Customers can design, execute, and monitor their data replication and stream data processing solutions without the need to allocate or manage compute environments. Ogg provides a format schema for changelogs and supports serializing messages using JSON.</description>
    </item>
    <item>
      <title>Processing Timer Service</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream-v2/time-processing/processing_timer_service/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream-v2/time-processing/processing_timer_service/</guid>
      <description>Note: DataStream API V2 is a new set of APIs, to gradually replace the original DataStream API. It is currently in the experimental stage and is not fully available for production. Processing Timer Service # The processing timer service is a fundamental primitive of the Flink DataStream API. It allows users to register timers for performing calculations at specific processing time points.&#xA;For a comprehensive explanation of processing time, please refer to the section on Notions of Time: Event Time and Processing Time.</description>
    </item>
    <item>
      <title>REST API</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/rest_api/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/rest_api/</guid>
      <description>REST API # Flink has a monitoring API that can be used to query the status and statistics of running jobs, as well as recently completed jobs. This monitoring API is used by Flink&amp;rsquo;s own dashboard, but is also designed to be used by custom monitoring tools.&#xA;The monitoring API is a REST-ful API that accepts HTTP requests and responds with JSON data.&#xA;Overview # The monitoring API is backed by a web server that runs as part of the JobManager.</description>
    </item>
    <item>
      <title>SET Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/set/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/set/</guid>
      <description>SET Statements # Description # The SET statement sets a property, providing a way to set session variables and configuration properties, including system variables and Hive configuration. Environment variables, however, cannot be set via the SET statement. The behavior of SET with the Hive dialect is compatible with Hive&amp;rsquo;s.&#xA;EXAMPLES # -- set Flink&amp;#39;s configuration SET table.sql-dialect=default; -- set Hive&amp;#39;s configuration SET hiveconf:k1=v1; -- set system property SET system:k2=v2; -- set variable for current session SET hivevar:k3=v3; -- get value for configuration SET table.</description>
    </item>
    <item>
      <title>Sub-Queries</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/sub-queries/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/sub-queries/</guid>
      <description>Sub-Queries # Sub-Queries in the FROM Clause # Description # Hive dialect supports sub-queries in the FROM clause. The sub-query has to be given a name because every table in a FROM clause must have a name. Columns in the sub-query select list must have unique names. The columns in the sub-query select list are available in the outer query just like columns of a table. The sub-query can also be a query expression with UNION.</description>
    </item>
    <item>
      <title>TRUNCATE Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/truncate/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/truncate/</guid>
      <description>TRUNCATE Statements # Batch TRUNCATE statements are used to delete all rows from a table without dropping the table itself.&#xA;Attention Currently, the TRUNCATE statement is supported in batch mode, and it requires that the target table&amp;rsquo;s connector implement the SupportsTruncate interface to support row-level deletion. An exception will be thrown when trying to TRUNCATE a table that has not implemented the related interface.&#xA;Run a TRUNCATE statement # Java TRUNCATE statements can be executed with the executeSql() method of the TableEnvironment.</description>
    </item>
    <item>
      <title>Windows</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream-v2/builtin-funcs/windows/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream-v2/builtin-funcs/windows/</guid>
      <description>Note: DataStream API V2 is a new set of APIs, to gradually replace the original DataStream API. It is currently in the experimental stage and is not fully available for production. Windows # Windows are at the heart of processing infinite streams. Windows split the stream into &amp;ldquo;buckets&amp;rdquo; of finite size, over which we can apply computations. This document focuses on how windowing is performed in the Flink DataStream API and how the programmer can get the most out of the functionality it offers.</description>
    </item>
    <item>
      <title>Checkpointing under backpressure</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/state/checkpointing_under_backpressure/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/state/checkpointing_under_backpressure/</guid>
      <description>Checkpointing under backpressure # Normally, aligned checkpointing time is dominated by the synchronous and asynchronous parts of the checkpointing process. However, when a Flink job is running under heavy backpressure, the dominant factor in the end-to-end time of a checkpoint can be the time to propagate checkpoint barriers to all operators/subtasks. This is explained in the overview of the checkpointing process, and can be observed through high alignment time and start delay metrics.</description>
    </item>
    <item>
      <title>CTE</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/cte/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/cte/</guid>
      <description>Common Table Expression (CTE) # Description # A Common Table Expression (CTE) is a temporary result set derived from a query specified in a WITH clause, which immediately precedes a SELECT or INSERT keyword. The CTE is defined only within the execution scope of a single statement, and can be referred to within that scope.&#xA;Syntax # withClause: WITH cteClause [ , ... ] cteClause: cte_name AS (select statement) Note:</description>
    </item>
    <item>
      <title>Event Timer Service</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream-v2/time-processing/event_timer_service/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream-v2/time-processing/event_timer_service/</guid>
      <description>Note: DataStream API V2 is a new set of APIs, to gradually replace the original DataStream API. It is currently in the experimental stage and is not fully available for production. Event Timer Service # The event timer service is a high-level extension of the Flink DataStream API. It enables users to register timers for executing calculations at specific event time points and helps determine when to trigger windows within the Flink framework.</description>
    </item>
    <item>
      <title>EXPLAIN Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/explain/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/explain/</guid>
      <description>EXPLAIN Statements # EXPLAIN statements are used to explain the logical and optimized query plans of a query or an INSERT statement.&#xA;Run an EXPLAIN statement # Java EXPLAIN statements can be executed with the executeSql() method of the TableEnvironment. The executeSql() method returns the explain result for a successful EXPLAIN operation; otherwise it will throw an exception.&#xA;The following examples show how to run an EXPLAIN statement in TableEnvironment.&#xA;Scala EXPLAIN statements can be executed with the executeSql() method of the TableEnvironment.</description>
    </item>
    <item>
      <title>HBase</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/hbase/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/hbase/</guid>
      <description>HBase SQL Connector # Scan Source: Bounded Lookup Source: Sync Mode Sink: Batch Sink: Streaming Upsert Mode&#xA;The HBase connector allows for reading from and writing to an HBase cluster. This document describes how to set up the HBase connector to run SQL queries against HBase.&#xA;HBase always works in upsert mode for exchanging changelog messages with the external system, using a primary key defined in the DDL. The primary key must be defined on the HBase rowkey field (the rowkey field must be declared).</description>
    </item>
    <item>
      <title>Joining</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream-v2/builtin-funcs/joining/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream-v2/builtin-funcs/joining/</guid>
      <description>Note: DataStream API V2 is a new set of APIs, to gradually replace the original DataStream API. It is currently in the experimental stage and is not fully available for production. Joining # Join is used to merge two data streams by matching elements from both streams based on a common key, and performing calculations on the matched elements.&#xA;This section will introduce the Join operation in DataStream in detail.</description>
    </item>
    <item>
      <title>Over Aggregation</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/over-agg/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/over-agg/</guid>
      <description>Over Aggregation # Batch Streaming&#xA;OVER aggregates compute an aggregated value for every input row over a range of ordered rows. In contrast to GROUP BY aggregates, OVER aggregates do not reduce the number of result rows to a single row for every group. Instead OVER aggregates produce an aggregated value for every input row.&#xA;The following query computes for every order the sum of amounts of all orders for the same product that were received within one hour before the current order.</description>
    </item>
    <item>
      <title>Parquet</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/parquet/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/parquet/</guid>
      <description>Parquet Format # Format: Serialization Schema Format: Deserialization Schema&#xA;The Apache Parquet format allows reading and writing Parquet data.&#xA;Dependencies # In order to use the Parquet format, the following dependencies are required both for projects using a build automation tool (such as Maven or SBT) and for the SQL Client with SQL JAR bundles.&#xA;Maven dependency SQL Client &amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.apache.flink&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;flink-parquet&amp;lt;/artifactId&amp;gt; &amp;lt;version&amp;gt;2.2.0&amp;lt;/version&amp;gt; &amp;lt;/dependency&amp;gt; How to create a table with Parquet format # Here is an example to create a table using the Filesystem connector and Parquet format.</description>
    </item>
    <item>
      <title>Pulsar</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/pulsar/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/pulsar/</guid>
      <description>Apache Pulsar Connector # Flink provides an Apache Pulsar connector for reading and writing data from and to Pulsar topics with exactly-once guarantees.&#xA;Dependency # You can use the connector with Pulsar 2.10.0 or higher. It is recommended to always use the latest Pulsar version. The details on Pulsar compatibility can be found in PIP-72.&#xA;There is no connector (yet) available for Flink version 2.2.&#xA;In order to use the connector in PyFlink jobs, the following dependencies are required: Version PyFlink JAR flink-connector-pulsar There is no SQL jar (yet) available for Flink version 2.</description>
    </item>
    <item>
      <title>Savepoints</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/state/savepoints/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/state/savepoints/</guid>
      <description>Savepoints # What is a Savepoint? # A Savepoint is a consistent image of the execution state of a streaming job, created via Flink&amp;rsquo;s checkpointing mechanism. You can use Savepoints to stop-and-resume, fork, or update your Flink jobs. Savepoints consist of two parts: a directory with (typically large) binary files on stable storage (e.g. HDFS, S3, &amp;hellip;) and a (relatively small) metadata file. The files on stable storage represent the net data of the job&amp;rsquo;s execution state image.</description>
    </item>
    <item>
      <title>Advanced Configuration</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/configuration/advanced/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/configuration/advanced/</guid>
      <description>Advanced Configuration Topics # Anatomy of the Flink distribution # Flink itself consists of a set of classes and dependencies that form the core of Flink&amp;rsquo;s runtime and must be present when a Flink application is started. The classes and dependencies needed to run the system handle areas such as coordination, networking, checkpointing, failover, APIs, operators (such as windowing), resource management, etc.&#xA;These core classes and dependencies are packaged in the flink-dist.</description>
    </item>
    <item>
      <title>Checkpoints vs. Savepoints</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/state/checkpoints_vs_savepoints/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/state/checkpoints_vs_savepoints/</guid>
      <description>Checkpoints vs. Savepoints # Overview # Conceptually, Flink&amp;rsquo;s savepoints are different from checkpoints in a way that&amp;rsquo;s analogous to how backups are different from recovery logs in traditional database systems.&#xA;The primary purpose of checkpoints is to provide a recovery mechanism in case of unexpected job failures. A checkpoint&amp;rsquo;s lifecycle is managed by Flink, i.e. a checkpoint is created, owned, and released by Flink - without user interaction. Because checkpoints are triggered often, and are relied upon for failure recovery, the two main design goals for the checkpoint implementation are i) being as lightweight to create as possible and ii) being as fast to restore from as possible.</description>
    </item>
    <item>
      <title>JDBC</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/jdbc/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/datastream/jdbc/</guid>
      <description>JDBC Connector # This connector provides a sink that writes data to a JDBC database.&#xA;To use it, add the following dependency to your project (along with your JDBC driver):&#xA;There is no connector (yet) available for Flink version 2.2.&#xA;Note that the streaming connectors are currently NOT part of the binary distribution. See how to link with them for cluster execution here. A driver dependency is also required to connect to a specified database.</description>
    </item>
    <item>
      <title>Joins</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/joins/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/joins/</guid>
      <description>Joins # Batch Streaming&#xA;Flink SQL supports complex and flexible join operations over dynamic tables. There are several different types of joins to account for the wide variety of semantics queries may require.&#xA;By default, the order of joins is not optimized. Tables are joined in the order in which they are specified in the FROM clause. You can tweak the performance of your join queries by listing the tables with the lowest update frequency first and the tables with the highest update frequency last.</description>
    </item>
    <item>
      <title>Orc</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/orc/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/orc/</guid>
      <description>Orc Format # Format: Serialization Schema Format: Deserialization Schema&#xA;The Apache Orc format allows reading and writing Orc data.&#xA;Dependencies # In order to use the ORC format, the following dependencies are required both for projects using a build automation tool (such as Maven or SBT) and for the SQL Client with SQL JAR bundles.&#xA;Maven dependency SQL Client &amp;lt;dependency&amp;gt; &amp;lt;groupId&amp;gt;org.apache.flink&amp;lt;/groupId&amp;gt; &amp;lt;artifactId&amp;gt;flink-orc&amp;lt;/artifactId&amp;gt; &amp;lt;version&amp;gt;2.2.0&amp;lt;/version&amp;gt; &amp;lt;/dependency&amp;gt; How to create a table with Orc format # Here is an example to create a table using the Filesystem connector and Orc format.</description>
    </item>
    <item>
      <title>Transform Clause</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/transform/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/transform/</guid>
      <description>Transform Clause # Description # The TRANSFORM clause allows users to transform inputs using a user-specified command or script.&#xA;Syntax # query: SELECT TRANSFORM ( expression [ , ... ] ) [ inRowFormat ] [ inRecordWriter ] USING command_or_script [ AS colName [ colType ] [ , ... ] ] [ outRowFormat ] [ outRecordReader ] rowFormat : ROW FORMAT (DELIMITED [FIELDS TERMINATED BY char] [COLLECTION ITEMS TERMINATED BY char] [MAP KEYS TERMINATED BY char] [ESCAPED BY char] [LINES SEPARATED BY char] | SERDE serde_name [WITH SERDEPROPERTIES property_name=property_value, property_name=property_value, .</description>
    </item>
    <item>
      <title>Upgrading Applications and Flink Versions</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/upgrading/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/upgrading/</guid>
      <description>Upgrading Applications and Flink Versions # Flink DataStream programs are typically designed to run for long periods of time such as weeks, months, or even years. As with all long-running services, Flink streaming applications need to be maintained, which includes fixing bugs, implementing improvements, or migrating an application to a Flink cluster of a later version.&#xA;This document describes how to update a Flink streaming application and how to migrate a running streaming application to a different Flink cluster.</description>
    </item>
    <item>
      <title>USE Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/use/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/use/</guid>
      <description>USE Statements # USE statements are used to set the current database or catalog, or to change the resolution order and enabled status of modules.&#xA;Run a USE statement # Java USE statements can be executed with the executeSql() method of the TableEnvironment. The executeSql() method returns &amp;lsquo;OK&amp;rsquo; for a successful USE operation; otherwise it will throw an exception.&#xA;The following examples show how to run a USE statement in TableEnvironment.</description>
    </item>
    <item>
      <title>Vectorized User-defined Functions</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/udfs/vectorized_python_udfs/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/udfs/vectorized_python_udfs/</guid>
      <description>Vectorized User-defined Functions # Vectorized Python user-defined functions are functions which are executed by transferring a batch of elements between the JVM and the Python VM in Arrow columnar format. The performance of vectorized Python user-defined functions is usually much higher than that of non-vectorized Python user-defined functions, as the serialization/deserialization overhead and invocation overhead are greatly reduced. Besides, users can leverage popular Python libraries such as Pandas, NumPy, etc. for the implementation of vectorized Python user-defined functions.</description>
    </item>
    <item>
      <title>Window JOIN</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/window-join/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/window-join/</guid>
      <description>Window Join # Batch Streaming&#xA;A window join adds the dimension of time into the join criteria themselves. In doing so, the window join joins the elements of two streams that share a common key and are in the same window. The semantics of the window join are the same as those of the DataStream window join.&#xA;For streaming queries, unlike other joins on continuous tables, a window join does not emit intermediate results but only emits final results at the end of the window.</description>
    </item>
    <item>
      <title>Data Sources</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/sources/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/sources/</guid>
      <description>Data Sources # This page describes Flink&amp;rsquo;s Data Source API and the concepts and architecture behind it. Read this, if you are interested in how data sources in Flink work, or if you want to implement a new Data Source.&#xA;If you are looking for pre-defined source connectors, please check the Connector Docs.&#xA;Data Source Concepts # Core Components&#xA;A Data Source has three core components: Splits, the SplitEnumerator, and the SourceReader.</description>
    </item>
    <item>
      <title>File Systems</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/internals/filesystems/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/internals/filesystems/</guid>
      <description>File Systems # Flink has its own file system abstraction via the org.apache.flink.core.fs.FileSystem class. This abstraction provides a common set of operations and minimal guarantees across various types of file system implementations.&#xA;The FileSystem&amp;rsquo;s set of available operations is quite limited, in order to support a wide range of file systems. For example, appending to or mutating existing files is not supported.&#xA;File systems are identified by a file system scheme, such as file://, hdfs://, etc.</description>
    </item>
    <item>
      <title>Production Readiness Checklist</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/production_ready/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/production_ready/</guid>
      <description>Production Readiness Checklist # The production readiness checklist provides an overview of configuration options that should be carefully considered before bringing an Apache Flink job into production. While the Flink community has attempted to provide sensible defaults for each configuration, it is important to review this list and ensure the options chosen are sufficient for your needs.&#xA;Set An Explicit Max Parallelism # The max parallelism, set on a per-job and per-operator granularity, determines the maximum parallelism to which a stateful operator can scale.</description>
    </item>
    <item>
      <title>Raw</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/raw/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/formats/raw/</guid>
      <description>Raw Format # Format: Serialization Schema Format: Deserialization Schema&#xA;The Raw format allows reading and writing raw (byte-based) values as a single column.&#xA;Note: this format encodes null values as null of byte[] type. This may be a limitation when used with upsert-kafka, because upsert-kafka treats null values as a tombstone message (DELETE on the key). Therefore, we recommend avoiding the use of the upsert-kafka connector with the raw format as a value format.</description>
    </item>
    <item>
      <title>Set Operations</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/set-ops/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/set-ops/</guid>
      <description>Set Operations # Batch Streaming&#xA;UNION # UNION and UNION ALL return the rows that are found in either table. UNION takes only distinct rows while UNION ALL does not remove duplicates from the result rows.&#xA;Flink SQL&amp;gt; create view t1(s) as values (&amp;#39;c&amp;#39;), (&amp;#39;a&amp;#39;), (&amp;#39;b&amp;#39;), (&amp;#39;b&amp;#39;), (&amp;#39;c&amp;#39;); Flink SQL&amp;gt; create view t2(s) as values (&amp;#39;d&amp;#39;), (&amp;#39;e&amp;#39;), (&amp;#39;a&amp;#39;), (&amp;#39;b&amp;#39;), (&amp;#39;b&amp;#39;); Flink SQL&amp;gt; (SELECT s FROM t1) UNION (SELECT s FROM t2); +---+ | s| +---+ | c| | a| | b| | d| | e| +---+ Flink SQL&amp;gt; (SELECT s FROM t1) UNION ALL (SELECT s FROM t2); +---+ | s| +---+ | c| | a| | b| | b| | c| | d| | e| | a| | b| | b| +---+ INTERSECT # INTERSECT and INTERSECT ALL return the rows that are found in both tables.</description>
    </item>
    <item>
      <title>SHOW Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/show/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/show/</guid>
      <description>SHOW Statements # SHOW statements are used to list objects within their corresponding parent, such as catalogs, databases, tables and views, columns, functions, and modules. See the individual commands for more details and additional options.&#xA;SHOW CREATE statements are used to print a DDL statement with which a given object can be created. Currently, the &amp;lsquo;SHOW CREATE&amp;rsquo; statement is only available for printing the DDL statement of a given table or view.</description>
    </item>
    <item>
      <title>Table Sample</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/table-sample/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/hive-compatibility/hive-dialect/queries/table-sample/</guid>
      <description> Table Sample # Description # The TABLESAMPLE statement is used to sample rows from the table.&#xA;Syntax # TABLESAMPLE ( num_rows ROWS ) Note: Currently, only sampling a specific number of rows is supported. Parameters # num_rows ROWS&#xA;num_rows is a positive constant that specifies how many rows to sample.&#xA;Examples # SELECT * FROM src TABLESAMPLE (5 ROWS) </description>
    </item>
    <item>
      <title>Watermark</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream-v2/watermark/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream-v2/watermark/</guid>
      <description>Note: DataStream API V2 is a new set of APIs intended to gradually replace the original DataStream API. It is currently in the experimental stage and is not fully available for production. Watermark # Before introducing Watermark, users should be aware that Watermark in DataStream V2 does not refer to the original Watermark that measures progress in event time, but is a special event that can be customized by the user and propagated along the streams.</description>
    </item>
    <item>
      <title>Data Sinks</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/sinks/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/sinks/</guid>
      <description>Data Sinks # This page describes Flink&amp;rsquo;s Data Sink API and the concepts and architecture behind it. Read this, if you are interested in how data sinks in Flink work, or if you want to implement a new Data Sink.&#xA;If you are looking for pre-defined sink connectors, please check the Connector Docs.&#xA;The Data Sink API # This section describes the major interfaces of the new Sink API introduced in FLIP-191 and FLIP-372, and provides tips to the developers on the Sink development.</description>
    </item>
    <item>
      <title>LOAD Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/load/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/load/</guid>
      <description>LOAD Statements # LOAD statements are used to load a built-in or user-defined module.&#xA;Run a LOAD statement # Java LOAD statements can be executed with the executeSql() method of the TableEnvironment. The executeSql() method returns &amp;lsquo;OK&amp;rsquo; for a successful LOAD operation; otherwise, it will throw an exception.&#xA;The following examples show how to run a LOAD statement in TableEnvironment.&#xA;Scala LOAD statements can be executed with the executeSql() method of the TableEnvironment.</description>
    </item>
    <item>
      <title>ORDER BY clause</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/orderby/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/orderby/</guid>
      <description>ORDER BY clause # Batch Streaming&#xA;The ORDER BY clause causes the result rows to be sorted according to the specified expression(s). If two rows are equal according to the leftmost expression, they are compared according to the next expression and so on. If they are equal according to all specified expressions, they are returned in an implementation-dependent order.&#xA;When running in streaming mode, the primary sort order of a table must be ascending on a time attribute.</description>
    </item>
    <item>
      <title>State Backends</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/state/state_backends/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/state/state_backends/</guid>
      <description>State Backends # Programs written in the Data Stream API often hold state in various forms:&#xA;Windows gather elements or aggregates until they are triggered Transformation functions may use the key/value state interface to store values Transformation functions may implement the CheckpointedFunction interface to make their local variables fault tolerant See also state section in the streaming API guide.&#xA;When checkpointing is activated, such state is persisted upon checkpoints to guard against data loss and recover consistently.</description>
    </item>
    <item>
      <title>DataGen</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/datagen/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/datagen/</guid>
      <description>DataGen SQL Connector # Scan Source: Bounded Scan Source: UnBounded&#xA;The DataGen connector allows for creating tables based on in-memory data generation. This is useful when developing queries locally without access to external systems such as Kafka. Tables can include Computed Column syntax which allows for flexible record generation.&#xA;The DataGen connector is built-in, no additional dependencies are required.&#xA;Usage # By default, a DataGen table will create an unbounded number of rows with a random value for each column.</description>
    </item>
    <item>
      <title>LIMIT clause</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/limit/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/limit/</guid>
      <description>LIMIT clause # Batch The LIMIT clause constrains the number of rows returned by the SELECT statement. In general, this clause is used in conjunction with ORDER BY to ensure that the results are deterministic.&#xA;The following example selects the first 3 rows in the Orders table.&#xA;SELECT * FROM Orders ORDER BY orderTime LIMIT 3 Back to top</description>
    </item>
    <item>
      <title>Tuning Checkpoints and Large State</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/state/large_state_tuning/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/state/large_state_tuning/</guid>
      <description>Tuning Checkpoints and Large State # This page gives a guide on how to configure and tune applications that use large state.&#xA;Overview # For Flink applications to run reliably at large scale, two conditions must be fulfilled:&#xA;The application needs to be able to take checkpoints reliably&#xA;The resources need to be sufficient to catch up with the input data streams after a failure&#xA;The first sections discuss how to get well-performing checkpoints at scale.</description>
    </item>
    <item>
      <title>UNLOAD Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/unload/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/unload/</guid>
      <description>UNLOAD Statements # UNLOAD statements are used to unload a built-in or user-defined module.&#xA;Run an UNLOAD statement # Java UNLOAD statements can be executed with the executeSql() method of the TableEnvironment. The executeSql() method returns &amp;lsquo;OK&amp;rsquo; for a successful UNLOAD operation; otherwise it will throw an exception.&#xA;The following examples show how to run an UNLOAD statement in TableEnvironment.&#xA;Scala UNLOAD statements can be executed with the executeSql() method of the TableEnvironment.</description>
    </item>
    <item>
      <title>Print</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/print/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/print/</guid>
      <description>Print SQL Connector # Sink The Print connector allows for writing every row to the standard output or standard error stream.&#xA;It is designed for:&#xA;Easy testing of streaming jobs. Very useful for production debugging. Four possible format options:&#xA;Print Condition1 Condition2 PRINT_IDENTIFIER:taskId&gt; output PRINT_IDENTIFIER provided parallelism &gt; 1 PRINT_IDENTIFIER&gt; output PRINT_IDENTIFIER provided parallelism == 1 taskId&gt; output no PRINT_IDENTIFIER provided parallelism &gt; 1 output no PRINT_IDENTIFIER provided parallelism == 1 The output string format is &amp;ldquo;$row_kind(f0,f1,f2&amp;hellip;)&amp;rdquo;, where row_kind is the short string of RowKind, for example &amp;ldquo;+I(1,1)&amp;rdquo;.</description>
    </item>
    <item>
      <title>SET Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/set/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/set/</guid>
      <description>SET Statements # SET statements are used to modify the configuration or list the configuration.&#xA;Run a SET statement # SQL CLI SET statements can be executed in SQL CLI.&#xA;The following examples show how to run a SET statement in SQL CLI.&#xA;SQL CLI Flink SQL&amp;gt; SET &amp;#39;table.local-time-zone&amp;#39; = &amp;#39;Europe/Berlin&amp;#39;; [INFO] Session property has been set. Flink SQL&amp;gt; SET; &amp;#39;table.local-time-zone&amp;#39; = &amp;#39;Europe/Berlin&amp;#39; Syntax # SET (&amp;#39;key&amp;#39; = &amp;#39;value&amp;#39;)? If no key and value are specified, it just prints all the properties.</description>
    </item>
    <item>
      <title>Top-N</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/topn/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/topn/</guid>
      <description>Top-N # Batch Streaming&#xA;Top-N queries ask for the N smallest or largest values ordered by columns. Both smallest and largest value sets are considered Top-N queries. Top-N queries are useful in cases where the need is to display only the N bottom-most or the N top-most records from a batch/streaming table on a condition. This result set can be used for further analysis.&#xA;Flink uses the combination of an OVER window clause and a filter condition to express a Top-N query.</description>
    </item>
    <item>
      <title>BlackHole</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/blackhole/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/blackhole/</guid>
      <description>BlackHole SQL Connector # Sink: Bounded Sink: UnBounded&#xA;The BlackHole connector allows for swallowing all input records. It is designed for:&#xA;high-performance testing. UDF output without a substantive sink. Just like the /dev/null device on Unix-like operating systems.&#xA;The BlackHole connector is built-in.&#xA;How to create a BlackHole table # CREATE TABLE blackhole_table ( f0 INT, f1 INT, f2 STRING, f3 DOUBLE ) WITH ( &amp;#39;connector&amp;#39; = &amp;#39;blackhole&amp;#39; ); Alternatively, it may be based on an existing schema using the LIKE Clause.</description>
    </item>
    <item>
      <title>RESET Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/reset/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/reset/</guid>
      <description>RESET Statements # RESET statements are used to reset the configuration to the default.&#xA;Run a RESET statement # SQL CLI RESET statements can be executed in SQL CLI.&#xA;The following examples show how to run a RESET statement in SQL CLI.&#xA;SQL CLI Flink SQL&amp;gt; RESET &amp;#39;table.planner&amp;#39;; [INFO] Session property has been reset. Flink SQL&amp;gt; RESET; [INFO] All session properties have been set to their default values. Syntax # RESET (&amp;#39;key&amp;#39;)?</description>
    </item>
    <item>
      <title>Window Top-N</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/window-topn/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/window-topn/</guid>
      <description>Window Top-N # Batch Streaming&#xA;Window Top-N is a special Top-N which returns the N smallest or largest values for each window and other partitioned keys.&#xA;For streaming queries, unlike regular Top-N on continuous tables, window Top-N does not emit intermediate results but only a final result, the total top N records at the end of the window. Moreover, window Top-N purges all intermediate state when no longer needed. Therefore, window Top-N queries have better performance if users don&amp;rsquo;t need results updated per record.</description>
    </item>
    <item>
      <title>Deduplication</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/deduplication/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/deduplication/</guid>
      <description>Deduplication # Batch Streaming&#xA;Deduplication removes rows that duplicate over a set of columns, keeping only the first one or the last one. In some cases, the upstream ETL jobs are not end-to-end exactly-once; this may result in duplicate records in the sink in case of failover. However, the duplicate records will affect the correctness of downstream analytical jobs - e.g. SUM, COUNT - so deduplication is needed before further analysis.</description>
    </item>
    <item>
      <title>JAR Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/jar/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/jar/</guid>
      <description>JAR Statements # JAR statements are used to add user jars to the classpath, remove user jars from the classpath, or show the jars added to the classpath at runtime.&#xA;Flink SQL supports the following JAR statements for now:&#xA;ADD JAR SHOW JARS REMOVE JAR Run a JAR statement # SQL CLI The following examples show how to run JAR statements in SQL CLI. SQL CLI Flink SQL&amp;gt; ADD JAR &amp;#39;/path/hello.</description>
    </item>
    <item>
      <title>JOB Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/job/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/job/</guid>
      <description>JOB Statements # Job statements are used for management of Flink jobs.&#xA;Flink SQL supports the following JOB statements for now:&#xA;SHOW JOBS DESCRIBE JOB STOP JOB Run a JOB statement # SQL CLI The following examples show how to run JOB statements in SQL CLI. SQL CLI Flink SQL&amp;gt; SHOW JOBS; +----------------------------------+----------+---------+-------------------------+ | job id | job name | status | start time | +----------------------------------+----------+---------+-------------------------+ | 228d70913eab60dda85c5e7f78b5782c | myjob | RUNNING | 2023-02-11T05:03:51.</description>
    </item>
    <item>
      <title>Window Deduplication</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/window-deduplication/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/window-deduplication/</guid>
      <description>Window Deduplication # Streaming Window Deduplication is a special Deduplication which removes rows that duplicate over a set of columns, keeping the first one or the last one for each window and partitioned keys.&#xA;For streaming queries, unlike regular Deduplication on continuous tables, Window Deduplication does not emit intermediate results but only a final result at the end of the window. Moreover, Window Deduplication purges all intermediate state when no longer needed.</description>
    </item>
    <item>
      <title>Pattern Recognition</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/match_recognize/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/match_recognize/</guid>
      <description>Pattern Recognition # Streaming It is a common use case to search for a set of event patterns, especially in case of data streams. Flink comes with a complex event processing (CEP) library which allows for pattern detection in event streams. Furthermore, Flink&amp;rsquo;s SQL API provides a relational way of expressing queries with a large set of built-in functions and rule-based optimizations that can be used out of the box.</description>
    </item>
    <item>
      <title>UPDATE Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/update/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/update/</guid>
      <description>UPDATE Statements # The UPDATE statement is used to perform row-level updates on the target table according to the filter, if provided.&#xA;Attention Currently, the UPDATE statement is only supported in batch mode, and it requires that the target table connector implements the SupportsRowLevelUpdate interface to support row-level updates. An exception will be thrown when trying to UPDATE a table that has not implemented the related interface. Currently, no existing connector maintained by Flink supports UPDATE yet.</description>
    </item>
    <item>
      <title>DELETE Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/delete/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/delete/</guid>
      <description>DELETE Statements # The DELETE statement is used to perform row-level deletion on the target table according to the filter, if provided.&#xA;Attention Currently, the DELETE statement is only supported in batch mode, and it requires that the target table connector implements the SupportsRowLevelDelete interface to support row-level deletion. An exception will be thrown when trying to DELETE from a table that has not implemented the related interface. Currently, no existing connector maintained by Flink supports DELETE yet.</description>
    </item>
    <item>
      <title>Time Travel</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/time-travel/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/queries/time-travel/</guid>
      <description>Time Travel # Batch Streaming&#xA;The time travel syntax is used for querying historical data. It allows users to specify a point in time and query the corresponding table data.&#xA;Attention Currently, time travel requires that the catalog the table belongs to implements the getTable(ObjectPath tablePath, long timestamp) method. See more details in Catalog.&#xA;The syntax with a time travel clause is:&#xA;SELECT select_list FROM table_name FOR SYSTEM_TIME AS OF timestamp_expression Parameter Specification:</description>
    </item>
    <item>
      <title>CALL Statements</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/call/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sql/call/</guid>
      <description>Call Statements # Call statements are used to call a stored procedure which is usually provided to perform data manipulation or administrative tasks.&#xA;Attention Currently, Call statements require the procedure called to exist in the corresponding catalog. So, please make sure the procedure exists in the catalog. If it doesn&amp;rsquo;t exist, it&amp;rsquo;ll throw an exception. You may need to refer to the doc of the catalog to see the available procedures.</description>
    </item>
    <item>
      <title>Disaggregated State Management</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/state/disaggregated_state/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/state/disaggregated_state/</guid>
      <description>Disaggregated State Management # Overview # For the first ten years of Flink, state management was based on the memory or local disk of the TaskManager. This approach works well for most use cases, but it has some limitations:&#xA;Local Disk Constraints: The state size is limited by the memory or disk size of the TaskManager. Spiky Resource Usage: The local state model triggers periodic CPU and network I/O bursts during checkpointing or SST file compaction.</description>
    </item>
    <item>
      <title>Building Flink from Source</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/flinkdev/building/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/flinkdev/building/</guid>
      <description>Building Flink from Source # This page covers how to build Flink 2.2.0 from sources.&#xA;Build Flink # In order to build Flink you need the source code. Either download the source of a release or clone the git repository.&#xA;In addition, you need Maven 3.8.6 and a JDK (Java Development Kit). Flink requires Java 11 to build.&#xA;To clone from git, enter:&#xA;git clone https://github.com/apache/flink.git The simplest way of building Flink is by running:</description>
    </item>
    <item>
      <title>Data Types</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/types/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/types/</guid>
      <description>Data Types # Flink SQL has a rich set of native data types available to users.&#xA;Data Type # A data type describes the logical type of a value in the table ecosystem. It can be used to declare input and/or output types of operations.&#xA;Flink&amp;rsquo;s data types are similar to the SQL standard&amp;rsquo;s data type terminology but also contain information about the nullability of a value for efficient handling of scalar expressions.</description>
    </item>
    <item>
      <title>Program Packaging</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/execution/packaging/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/execution/packaging/</guid>
      <description>Program Packaging and Distributed Execution # As described earlier, Flink programs can be executed on clusters by using a remote environment. Alternatively, programs can be packaged into JAR Files (Java Archives) for execution. Packaging the program is a prerequisite to executing them through the command line interface.&#xA;Packaging Programs # To support execution from a packaged JAR file via the command line or web interface, a program must use the environment obtained by StreamExecutionEnvironment.</description>
    </item>
    <item>
      <title>Table API Tutorial</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table_api_tutorial/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table_api_tutorial/</guid>
      <description>Table API Tutorial # Apache Flink offers a Table API as a unified, relational API for batch and stream processing, i.e., queries are executed with the same semantics on unbounded, real-time streams or bounded, batch data sets and produce the same results. The Table API in Flink is commonly used to ease the definition of data analytics, data pipelining, and ETL applications.&#xA;What Will You Be Building? # In this tutorial, you will learn how to build a pure Python Flink Table API pipeline.</description>
    </item>
    <item>
      <title>DataStream API Tutorial</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/datastream_tutorial/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/datastream_tutorial/</guid>
      <description>DataStream API Tutorial # Apache Flink offers a DataStream API for building robust, stateful streaming applications. It provides fine-grained control over state and time, which allows for the implementation of advanced event-driven systems. In this step-by-step guide, you’ll learn how to build a simple streaming application with PyFlink and the DataStream API.&#xA;What Will You Be Building? # In this tutorial, you will learn how to write a simple Python DataStream pipeline.</description>
    </item>
    <item>
      <title>Time Zone</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/timezone/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/timezone/</guid>
      <description>Time Zone # Flink provides rich data types for Date and Time, including DATE, TIME, TIMESTAMP, TIMESTAMP_LTZ, INTERVAL YEAR TO MONTH, INTERVAL DAY TO SECOND (please see Date and Time for detailed information). Flink supports setting the time zone at the session level (please see table.local-time-zone for detailed information). These timestamp data types and Flink&amp;rsquo;s time zone support make it easy to process business data across time zones.&#xA;TIMESTAMP vs TIMESTAMP_LTZ # TIMESTAMP type # TIMESTAMP(p) is an abbreviation for TIMESTAMP(p) WITHOUT TIME ZONE, where the precision p ranges from 0 to 9, with 6 by default.</description>
    </item>
    <item>
      <title>Data Types</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/datastream/data_types/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/datastream/data_types/</guid>
      <description>Data Types # In Apache Flink&amp;rsquo;s Python DataStream API, a data type describes the type of a value in the DataStream ecosystem. It can be used to declare input and output types of operations and informs the system how to serialize elements.&#xA;Pickle Serialization # If the type has not been declared, data would be serialized or deserialized using Pickle. For example, the program below specifies no data types.</description>
    </item>
    <item>
      <title>Intro to the Python Table API</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/intro_to_table_api/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/intro_to_table_api/</guid>
      <description>Intro to the Python Table API # This document is a short introduction to the PyFlink Table API, which is used to help novice users quickly understand the basic usage of PyFlink Table API. For advanced usage, please refer to other documents in this user guide.&#xA;Common Structure of Python Table API Program # All Table API and SQL programs, both batch and streaming, follow the same pattern. The following code example shows the common structure of Table API and SQL programs.</description>
    </item>
    <item>
      <title>TableEnvironment</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/table_environment/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/table_environment/</guid>
      <description>TableEnvironment # This document is an introduction of PyFlink TableEnvironment. It includes detailed descriptions of every public interface of the TableEnvironment class.&#xA;Create a TableEnvironment # The recommended way to create a TableEnvironment is to create from an EnvironmentSettings object:&#xA;from pyflink.common import Configuration from pyflink.table import EnvironmentSettings, TableEnvironment # create a streaming TableEnvironment config = Configuration() config.set_string(&amp;#39;execution.buffer-timeout&amp;#39;, &amp;#39;1 min&amp;#39;) env_settings = EnvironmentSettings \ .new_instance() \ .in_streaming_mode() \ .with_configuration(config) \ .</description>
    </item>
    <item>
      <title>Dependency Management</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/dependency_management/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/dependency_management/</guid>
      <description>Dependency Management # It is often necessary to use dependencies inside Python API programs. For example, users may need to use third-party Python libraries in Python user-defined functions. In addition, in scenarios such as machine learning prediction, users may want to load a machine learning model inside the Python user-defined functions.&#xA;When the PyFlink job is executed locally, users can install the third-party Python libraries into the local Python environment, download the machine learning model locally, etc.</description>
    </item>
    <item>
      <title>Overview</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/operations/operations/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/operations/operations/</guid>
      <description> </description>
    </item>
    <item>
      <title>Parallel Execution</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/execution/parallel/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/execution/parallel/</guid>
      <description>Parallel Execution # This section describes how the parallel execution of programs can be configured in Flink. A Flink program consists of multiple tasks (transformations/operators, data sources, and sinks). A task is split into several parallel instances for execution and each parallel instance processes a subset of the task&amp;rsquo;s input data. The number of parallel instances of a task is called its parallelism.&#xA;If you want to use savepoints you should also consider setting a maximum parallelism (or max parallelism).</description>
    </item>
    <item>
      <title>Row-based Operations</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/operations/row_based_operations/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/operations/row_based_operations/</guid>
      <description>Row-based Operations # This page describes how to use row-based operations in PyFlink Table API.&#xA;Map # Performs a map operation with a python general scalar function or vectorized scalar function. The output will be flattened if the output type is a composite type.&#xA;from pyflink.common import Row from pyflink.table import EnvironmentSettings, TableEnvironment from pyflink.table.expressions import col from pyflink.table.udf import udf env_settings = EnvironmentSettings.in_batch_mode() table_env = TableEnvironment.create(env_settings) table = table_env.</description>
    </item>
    <item>
      <title>State</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/datastream/state/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/datastream/state/</guid>
      <description> </description>
    </item>
    <item>
      <title>Table API</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/tableapi/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/tableapi/</guid>
      <description>Table API # The Table API is a unified, relational API for stream and batch processing. Table API queries can be run on batch or streaming input without modifications. The Table API is a superset of the SQL language and is specially designed for working with Apache Flink. The Table API is a language-integrated API for Scala, Java and Python. Instead of specifying queries as String values as common with SQL, Table API queries are defined in a language-embedded style in Java, Scala or Python with IDE support like autocompletion and syntax validation.</description>
    </item>
    <item>
      <title>Data Types</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/python_types/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/python_types/</guid>
      <description>Data Types # This page describes the data types supported in PyFlink Table API.&#xA;Data Type # A data type describes the logical type of a value in the table ecosystem. It can be used to declare input and/or output types of Python user-defined functions. Users of the Python Table API work with instances of pyflink.table.types.DataType within the Python Table API or when defining user-defined functions.&#xA;A DataType instance declares the logical type which does not imply a concrete physical representation for transmission or storage.</description>
    </item>
    <item>
      <title>System (Built-in) Functions</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/system_functions/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/system_functions/</guid>
      <description> </description>
    </item>
    <item>
      <title>System (Built-in) Functions</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/functions/systemfunctions/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/functions/systemfunctions/</guid>
      <description>System (Built-in) Functions # Flink Table API &amp;amp; SQL provides users with a set of built-in functions for data transformations. This page gives a brief overview of them. If a function that you need is not supported yet, you can implement a user-defined function. If you think that the function is general enough, please open a Jira issue for it with a detailed description.&#xA;Scalar Functions # The scalar functions take zero, one or more values as the input and return a single value as the result.</description>
    </item>
    <item>
      <title>Side Outputs</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/side_output/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/side_output/</guid>
      <description>Side Outputs # In addition to the main stream that results from DataStream operations, you can also produce any number of additional side output result streams. The type of data in the result streams does not have to match the type of data in the main stream and the types of the different side outputs can also differ. This operation can be useful when you want to split a stream of data where you would normally have to replicate the stream and then filter out from each stream the data that you don&amp;rsquo;t want to have.</description>
    </item>
    <item>
      <title>Execution Mode</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/python_execution_mode/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/python_execution_mode/</guid>
      <description>Execution Mode # The Python API supports different runtime execution modes from which you can choose depending on the requirements of your use case and the characteristics of your job. The Python runtime execution mode defines how the Python user-defined functions will be executed.&#xA;Prior to release-1.15, there was only one execution mode, called PROCESS mode. The PROCESS mode means that the Python user-defined functions will be executed in separate Python processes.</description>
    </item>
    <item>
      <title>Conversions between PyFlink Table and Pandas DataFrame</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/conversion_of_pandas/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/conversion_of_pandas/</guid>
      <description>Conversions between PyFlink Table and Pandas DataFrame # PyFlink Table API supports conversion between PyFlink Table and Pandas DataFrame.&#xA;Convert Pandas DataFrame to PyFlink Table # Pandas DataFrames can be converted into a PyFlink Table. Internally, PyFlink will serialize the Pandas DataFrame using Arrow columnar format on the client. The serialized data will be processed and deserialized in Arrow source during execution. The Arrow source can also be used in streaming jobs, and is integrated with checkpointing to provide exactly-once guarantees.</description>
    </item>
    <item>
      <title>Conversions between Table and DataStream</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/conversion_of_data_stream/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/conversion_of_data_stream/</guid>
      <description> </description>
    </item>
    <item>
      <title>Procedures</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/procedures/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/procedures/</guid>
      <description>Procedures # Flink Table API &amp;amp; SQL empowers users to perform data manipulation and administrative tasks with procedures. Procedures can run Flink jobs with the provided StreamExecutionEnvironment, making them more powerful and flexible.&#xA;Implementation Guide # To call a procedure, it must be available in a catalog. To provide procedures in a catalog, you need to implement the procedure and then return it using the Catalog.getProcedure(ObjectPath procedurePath) method. The following steps will guide you on how to implement and provide a procedure in a catalog.</description>
    </item>
    <item>
      <title>SQL</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/sql/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/sql/</guid>
      <description> </description>
    </item>
    <item>
      <title>Handling Application Parameters</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/application_parameters/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/application_parameters/</guid>
      <description>Handling Application Parameters # Almost all Flink applications, both batch and streaming, rely on external configuration parameters. They are used to specify input and output sources (like paths or addresses), system parameters (parallelism, runtime configuration), and application specific parameters (typically used within user functions).&#xA;Flink provides a simple utility called ParameterTool to provide some basic tooling for solving these problems. Please note that you don&amp;rsquo;t have to use the ParameterTool described here.</description>
    </item>
    <item>
      <title>Task Failure Recovery</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/state/task_failure_recovery/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/ops/state/task_failure_recovery/</guid>
      <description>Task Failure Recovery # When a task failure happens, Flink needs to restart the failed task and other affected tasks to recover the job to a normal state.&#xA;Restart strategies and failover strategies are used to control the task restarting. Restart strategies decide whether and when the failed/affected tasks can be restarted. Failover strategies decide which tasks should be restarted to recover the job.&#xA;Restart Strategies # The cluster can be started with a default restart strategy which is always used when no job specific restart strategy has been defined.</description>
    </item>
    <item>
      <title>User-defined Functions</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/functions/udfs/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/functions/udfs/</guid>
      <description>User-defined Functions # User-defined functions (UDFs) are extension points to call frequently used logic or custom logic that cannot be expressed otherwise in queries.&#xA;User-defined functions can be implemented in a JVM language (such as Java or Scala) or Python. An implementer can use arbitrary third party libraries within a UDF. This page will focus on JVM-based languages, please refer to the PyFlink documentation for details on writing general and vectorized UDFs in Python.</description>
    </item>
    <item>
      <title>Process Table Functions</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/functions/ptfs/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/functions/ptfs/</guid>
      <description>Process Table Functions (PTFs) # Process Table Functions (PTFs) are the most powerful function kind for Flink SQL and Table API. They enable implementing user-defined operators that can be as feature-rich as built-in operations. PTFs can take (partitioned) tables to produce a new table. They have access to Flink&amp;rsquo;s managed state, event-time and timer services, and underlying table changelogs.&#xA;Conceptually, a PTF is itself a user-defined function that is a superset of all other user-defined functions.</description>
    </item>
    <item>
      <title>Catalogs</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/catalogs/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/catalogs/</guid>
      <description> </description>
    </item>
    <item>
      <title>Modules</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/modules/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/modules/</guid>
      <description>Modules # Modules allow users to extend Flink&amp;rsquo;s built-in objects, such as defining functions that behave like Flink built-in functions. They are pluggable, and while Flink provides a few pre-built modules, users can write their own.&#xA;For example, users can define their own geo functions and plug them into Flink as built-in functions to be used in Flink SQL and Table APIs. Another example is that users can load an off-the-shelf Hive module to use Hive built-in functions as Flink built-in functions.</description>
    </item>
    <item>
      <title>Catalogs</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/catalogs/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/catalogs/</guid>
      <description>Catalogs # Catalogs provide metadata, such as databases, tables, partitions, views, and functions, along with the information needed to access data stored in a database or other external systems.&#xA;One of the most crucial aspects of data processing is managing metadata. It may be transient metadata like temporary tables, or UDFs registered against the table environment. Or permanent metadata, like that in a Hive Metastore. Catalogs provide a unified API for managing metadata and making it accessible from the Table API and SQL Queries.</description>
    </item>
    <item>
      <title>Flink JDBC Driver</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/jdbcdriver/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/jdbcdriver/</guid>
      <description>Flink JDBC Driver # The Flink JDBC Driver is a Java library for enabling clients to send Flink SQL to your Flink cluster via the SQL Gateway.&#xA;You can also use the Hive JDBC Driver with Flink. This is beneficial if you are running Hive dialect SQL and want to make use of the Hive Catalog. To use Hive JDBC with Flink you need to run the SQL Gateway with the HiveServer2 endpoint.</description>
    </item>
    <item>
      <title>OLAP Quickstart</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/olap_quickstart/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/olap_quickstart/</guid>
      <description>OLAP Quickstart # OLAP (OnLine Analysis Processing) is a key technology in the field of data analysis; it is generally used to perform complex queries on large data sets with latencies in seconds. Flink not only supports streaming and batch computing, but can also be deployed as an OLAP computing service. This page will show you how to quickly set up a local Flink OLAP service, and will also introduce some best practices to help you deploy a Flink OLAP service in production.</description>
    </item>
    <item>
      <title>SQL Client</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sqlclient/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sqlclient/</guid>
      <description>SQL Client # Flink’s Table &amp;amp; SQL API makes it possible to work with queries written in the SQL language, but these queries need to be embedded within a table program that is written in either Java or Scala. Moreover, these programs need to be packaged with a build tool before being submitted to a cluster. This more or less limits the usage of Flink to Java/Scala programmers.&#xA;The SQL Client aims to provide an easy way of writing, debugging, and submitting table programs to a Flink cluster without a single line of Java or Scala code.</description>
    </item>
    <item>
      <title>Download</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/models/downloads/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/models/downloads/</guid>
      <description> SQL Models download page # The page contains links to optional SQL Client models that are not part of the binary distribution.&#xA;Optional SQL models # Name Version Download Link OpenAI Download (asc, sha1) </description>
    </item>
    <item>
      <title>Download</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/downloads/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/connectors/table/downloads/</guid>
      <description>SQL Connectors download page # The page contains links to optional SQL Client connectors and formats that are not part of the binary distribution.&#xA;Optional SQL formats # Name Download link Avro Download (asc, sha1) Avro Schema Registry Download (asc, sha1) Debezium Download (asc, sha1) ORC Download (asc, sha1) Parquet Download (asc, sha1) Protobuf Download (asc, sha1) Optional SQL connectors # Name Version Download Link Amazon Kinesis Data Firehose universal Download (asc, sha1) Amazon DynamoDB universal Download (asc, sha1) Elasticsearch 6.</description>
    </item>
    <item>
      <title>Network Buffer Tuning</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/memory/network_mem_tuning/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/memory/network_mem_tuning/</guid>
      <description>Network memory tuning guide # Overview # Each record in Flink is sent to the next subtask compounded with other records in a network buffer, the smallest unit for communication between subtasks. In order to maintain consistent high throughput, Flink uses network buffer queues (also known as in-flight data) on the input and output side of the transmission process.&#xA;Each subtask has an input queue waiting to consume data and an output queue waiting to send data to the next subtask.</description>
    </item>
    <item>
      <title>Testing</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/testing/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/testing/</guid>
      <description>Testing # Testing is an integral part of every software development process; as such, Apache Flink comes with tooling to test your application code on multiple levels of the testing pyramid.&#xA;Testing User-Defined Functions # Usually, one can assume that Flink produces correct results outside of a user-defined function. Therefore, it is recommended to test those classes that contain the main business logic with unit tests as much as possible.</description>
    </item>
    <item>
      <title>Experimental Features</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/experimental/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/experimental/</guid>
      <description>Experimental Features # This section describes experimental features in the DataStream API. Experimental features are still evolving and can be either unstable, incomplete, or subject to heavy change in future versions.&#xA;Reinterpreting a pre-partitioned data stream as keyed stream # We can re-interpret a pre-partitioned data stream as a keyed stream to avoid shuffling.&#xA;WARNING: The re-interpreted data stream MUST already be pre-partitioned in EXACTLY the same way Flink&amp;rsquo;s keyBy would partition the data in a shuffle w.</description>
    </item>
    <item>
      <title>Configuration</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/config/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/config/</guid>
      <description>Configuration # By default, the Table &amp;amp; SQL API is preconfigured for producing accurate results with acceptable performance.&#xA;Depending on the requirements of a table program, it might be necessary to adjust certain parameters for optimization. For example, unbounded streaming programs may need to ensure that the required state size is capped (see streaming concepts).&#xA;Overview # When instantiating a TableEnvironment, EnvironmentSettings can be used to pass the desired configuration for the current session, by passing a Configuration object to the EnvironmentSettings.</description>
    </item>
    <item>
      <title>Metrics</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/metrics/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/metrics/</guid>
      <description>Metrics # PyFlink exposes a metric system that allows gathering and exposing metrics to external systems.&#xA;Registering metrics # You can access the metric system from a Python user-defined function by calling function_context.get_metric_group() in the open method. The get_metric_group() method returns a MetricGroup object on which you can create and register new metrics.&#xA;Metric types # PyFlink supports Counters, Gauges, Distribution and Meters.&#xA;Counter # A Counter is used to count something.</description>
    </item>
    <item>
      <title>Performance Tuning</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/tuning/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/tuning/</guid>
      <description>Performance Tuning # SQL is the most widely used language for data analytics. Flink&amp;rsquo;s Table API and SQL enable users to define efficient stream analytics applications in less time and with less effort. Moreover, the Flink Table API and SQL are effectively optimized: they integrate many query optimizations and tuned operator implementations. But not all of the optimizations are enabled by default, so for some workloads it is possible to improve performance by turning on some options.</description>
    </item>
    <item>
      <title>Configuration</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/python_config/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/python_config/</guid>
      <description>Configuration # Depending on the requirements of a Python API program, it might be necessary to adjust certain parameters for optimization.&#xA;For a Python DataStream API program, the config options can be set as follows:&#xA;from pyflink.common import Configuration from pyflink.datastream import StreamExecutionEnvironment config = Configuration() config.set_integer(&amp;#34;python.fn-execution.bundle.size&amp;#34;, 1000) env = StreamExecutionEnvironment.get_execution_environment(config) For a Python Table API program, all the config options available for Java/Scala Table API programs can also be used in the Python Table API program.</description>
    </item>
    <item>
      <title>Debugging</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/debugging/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/debugging/</guid>
      <description>Debugging # This page describes how to debug in PyFlink.&#xA;Logging Infos # Client Side Logging # You can log contextual and debug information via print or standard Python logging modules in PyFlink jobs in places outside Python UDFs. The logging messages will be printed in the log files of the client during job submission.&#xA;from pyflink.table import EnvironmentSettings, TableEnvironment # create a TableEnvironment env_settings = EnvironmentSettings.in_streaming_mode() table_env = TableEnvironment.</description>
    </item>
    <item>
      <title>Connectors</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/python_table_api_connectors/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/table/python_table_api_connectors/</guid>
      <description>Connectors # This page describes how to use connectors in PyFlink and highlights the details to be aware of when using Flink connectors in Python programs.&#xA;Note For general connector information and common configuration, please refer to the corresponding Java/Scala documentation.&#xA;Download connector and format jars # Since Flink is a Java/Scala-based project, for both connectors and formats, implementations are available as jars that need to be specified as job dependencies.</description>
    </item>
    <item>
      <title>Environment Variables</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/environment_variables/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/environment_variables/</guid>
      <description>Environment Variables # These environment variables will affect the behavior of PyFlink:&#xA;Environment Variable Description FLINK_HOME A PyFlink job will be compiled before submitting and it requires Flink&#39;s distribution to compile the job. PyFlink&#39;s installation package already contains Flink&#39;s distribution and it&#39;s used by default. This environment variable allows you to specify a custom Flink distribution. PYFLINK_CLIENT_EXECUTABLE The path of the Python interpreter used to launch the Python process when submitting the Python jobs via &#34;</description>
    </item>
    <item>
      <title>User-defined Sources &amp; Sinks</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sourcessinks/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/sourcessinks/</guid>
      <description>User-defined Sources &amp;amp; Sinks # Dynamic tables are the core concept of Flink&amp;rsquo;s Table &amp;amp; SQL API for processing both bounded and unbounded data in a unified fashion.&#xA;Because dynamic tables are only a logical concept, Flink does not own the data itself. Instead, the content of a dynamic table is stored in external systems (such as databases, key-value stores, message queues) or files.&#xA;Dynamic sources and dynamic sinks can be used to read and write data from and to an external system.</description>
    </item>
    <item>
      <title>FAQ</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/faq/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/python/faq/</guid>
      <description>FAQ # This page describes the solutions to some common questions for PyFlink users.&#xA;Preparing Python Virtual Environment # You can download a convenience script to prepare a Python virtual env zip which can be used on Mac OS and most Linux distributions. You can specify the PyFlink version to generate a Python virtual environment required for the corresponding PyFlink version, otherwise the most recent version will be installed.</description>
    </item>
    <item>
      <title>Java Lambda Expressions</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/java_lambdas/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/datastream/java_lambdas/</guid>
      <description>Java Lambda Expressions # Java 8 introduced several new language features designed for faster and clearer coding. With the most important feature, the so-called &amp;ldquo;Lambda Expressions&amp;rdquo;, it opened the door to functional programming. Lambda expressions allow for implementing and passing functions in a straightforward way without having to declare additional (anonymous) classes.&#xA;Flink supports the usage of lambda expressions for all operators of the Java API, however, whenever a lambda expression uses Java generics you need to declare type information explicitly.</description>
    </item>
    <item>
      <title>Temporal Table Function</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/concepts/temporal_table_function/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/dev/table/concepts/temporal_table_function/</guid>
      <description>Temporal Table Function # A temporal table function provides access to the version of a temporal table at a specific point in time. In order to access the data in a temporal table, one must pass a time attribute that determines the version of the table that will be returned. Flink uses the SQL syntax of table functions to provide a way to express it.&#xA;Unlike a versioned table, temporal table functions can only be defined on top of append-only streams; they do not support changelog inputs.</description>
    </item>
    <item>
      <title>Failure Enrichers</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/advanced/failure_enrichers/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/advanced/failure_enrichers/</guid>
      <description>Custom failure enrichers # Flink provides a pluggable interface for users to register their custom logic and enrich failures with extra metadata labels (string key-value pairs). This enables users to implement their own failure enrichment plugins to categorize job failures, expose custom metrics, or make calls to external notification systems.&#xA;FailureEnrichers are triggered every time an exception is reported at runtime by the JobManager. Every FailureEnricher may asynchronously return labels associated with the failure that are then exposed via the JobManager&amp;rsquo;s REST API (e.</description>
    </item>
    <item>
      <title>Job Status Changed Listener</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/advanced/job_status_listener/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/docs/deployment/advanced/job_status_listener/</guid>
      <description>Job status changed listener # Flink provides a pluggable interface for users to register their custom logic for handling job status changes, in which lineage info about sources/sinks is provided. This enables users to implement their own Flink lineage reporter to send lineage info to third-party data lineage systems such as Datahub and Openlineage.&#xA;The job status changed listeners are triggered every time a status change happens for the application.</description>
    </item>
    <item>
      <title>Release Notes - Flink 1.10</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.10/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.10/</guid>
      <description>Release Notes - Flink 1.10 # These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.9 and Flink 1.10. Please read these notes carefully if you are planning to upgrade your Flink version to 1.10.&#xA;Clusters &amp;amp; Deployment # FileSystems should be loaded via Plugin Architecture # FLINK-11956 # The s3-hadoop and s3-presto filesystems no longer use class relocations and need to be loaded through plugins, but now seamlessly integrate with all credential providers.</description>
    </item>
    <item>
      <title>Release Notes - Flink 1.11</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.11/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.11/</guid>
      <description>Release Notes - Flink 1.11 # These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.10 and Flink 1.11. Please read these notes carefully if you are planning to upgrade your Flink version to 1.11.&#xA;Clusters &amp;amp; Deployment # Support for Application Mode # FLIP-85 # The user can now submit applications and choose to execute their main() method on the cluster rather than the client.</description>
    </item>
    <item>
      <title>Release Notes - Flink 1.12</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.12/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.12/</guid>
      <description>Release Notes - Flink 1.12 # These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.11 and Flink 1.12. Please read these notes carefully if you are planning to upgrade your Flink version to 1.12.&#xA;Known Issues # Unaligned checkpoint recovery may lead to corrupted data stream # FLINK-20654 # Using unaligned checkpoints in Flink 1.12.0 combined with two/multiple inputs tasks or with union inputs for single input tasks can result in corrupted state.</description>
    </item>
    <item>
      <title>Release Notes - Flink 1.13</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.13/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.13/</guid>
      <description>Release Notes - Flink 1.13 # These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.12 and Flink 1.13. Please read these notes carefully if you are planning to upgrade your Flink version to 1.13.&#xA;Failover # Remove state.backend.async option. # FLINK-21935 # The state.backend.async option is deprecated. Snapshots are always asynchronous now (as they were by default before) and there is no option to configure a synchronous snapshot any more.</description>
    </item>
    <item>
      <title>Release Notes - Flink 1.14</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.14/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.14/</guid>
      <description>Release notes - Flink 1.14 # These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.13 and Flink 1.14. Please read these notes carefully if you are planning to upgrade your Flink version to 1.14.&#xA;Known issues # State migration issues # Some of our internal serializers, such as RowSerializer, TwoPhaseCommitSinkFunction&amp;rsquo;s serializer, and LinkedListSerializer, might prevent a successful job start if state migration is necessary.</description>
    </item>
    <item>
      <title>Release Notes - Flink 1.15</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.15/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.15/</guid>
      <description>Release notes - Flink 1.15 # These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.14 and Flink 1.15. Please read these notes carefully if you are planning to upgrade your Flink version to 1.15.&#xA;Summary of changed dependency names # There are several changes in Flink 1.15 that require updating dependency names when upgrading from earlier versions, mainly stemming from the effort to opt Scala dependencies out of non-Scala modules and to reorganize the table modules.</description>
    </item>
    <item>
      <title>Release Notes - Flink 1.16</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.16/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.16/</guid>
      <description>Release notes - Flink 1.16 # These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.15 and Flink 1.16. Please read these notes carefully if you are planning to upgrade your Flink version to 1.16.&#xA;Clusters &amp;amp; Deployment # Deprecate host/web-ui-port parameter of jobmanager.sh # FLINK-28735 # The host/web-ui-port parameters of the jobmanager.sh script have been deprecated. These can (and should) be specified with the corresponding options as dynamic properties.</description>
    </item>
    <item>
      <title>Release Notes - Flink 1.17</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.17/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.17/</guid>
      <description>Release notes - Flink 1.17 # These release notes discuss important aspects, such as configuration, behavior or dependencies, that changed between Flink 1.16 and Flink 1.17. Please read these notes carefully if you are planning to upgrade your Flink version to 1.17.&#xA;Clusters &amp;amp; Deployment # Only one Zookeeper version is bundled in flink-dist # FLINK-30237 # The Flink distribution no longer bundles two different Zookeeper client jars (one in lib, one in lib/opt).</description>
    </item>
    <item>
      <title>Release Notes - Flink 1.18</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.18/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.18/</guid>
      <description>Release notes - Flink 1.18 # These release notes discuss important aspects, such as configuration, behavior or dependencies, that changed between Flink 1.17 and Flink 1.18. Please read these notes carefully if you are planning to upgrade your Flink version to 1.18.&#xA;Build System # Support Java 17 (LTS) # FLINK-15736 # Apache Flink was made ready to compile and run with Java 17 (LTS). This feature is still in beta mode.</description>
    </item>
    <item>
      <title>Release Notes - Flink 1.19</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.19/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.19/</guid>
      <description>Release notes - Flink 1.19 # These release notes discuss important aspects, such as configuration, behavior or dependencies, that changed between Flink 1.18 and Flink 1.19. Please read these notes carefully if you are planning to upgrade your Flink version to 1.19.&#xA;Dependency upgrades # Drop support for Python 3.7 # FLINK-33029 # Add support for Python 3.11 # FLINK-33030 # Build System # Support Java 21 # FLINK-33163 # Apache Flink was made ready to compile and run with Java 21.</description>
    </item>
    <item>
      <title>Release Notes - Flink 1.20</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.20/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.20/</guid>
      <description>Release notes - Flink 1.20 # These release notes discuss important aspects, such as configuration, behavior or dependencies, that changed between Flink 1.19 and Flink 1.20. Please read these notes carefully if you are planning to upgrade your Flink version to 1.20.&#xA;Checkpoints # Unified File Merging Mechanism for Checkpoints # FLINK-32070 # The unified file merging mechanism for checkpointing is introduced to Flink 1.20 as an MVP (&amp;ldquo;minimum viable product&amp;rdquo;) feature.</description>
    </item>
    <item>
      <title>Release Notes - Flink 1.5</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.5/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.5/</guid>
      <description>Release Notes - Flink 1.5 # These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.4 and Flink 1.5. Please read these notes carefully if you are planning to upgrade your Flink version to 1.5.&#xA;Update Configuration for Reworked Job Deployment # Flink’s reworked cluster and job deployment component improves the integration with resource managers and enables dynamic resource allocation. One result of these changes is that you no longer have to specify the number of containers when submitting applications to YARN and Mesos.</description>
    </item>
    <item>
      <title>Release Notes - Flink 1.6</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.6/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.6/</guid>
      <description>Release Notes - Flink 1.6 # These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.5 and Flink 1.6. Please read these notes carefully if you are planning to upgrade your Flink version to 1.6.&#xA;Changed Configuration Default Values # The default value of the slot idle timeout slot.idle.timeout is set to the default value of the heartbeat timeout (50 s).&#xA;Changed Elasticsearch 5.</description>
    </item>
    <item>
      <title>Release Notes - Flink 1.7</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.7/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.7/</guid>
      <description>Release Notes - Flink 1.7 # These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.6 and Flink 1.7. Please read these notes carefully if you are planning to upgrade your Flink version to 1.7.&#xA;Scala 2.12 support # When using Scala 2.12 you might have to add explicit type annotations in places where they were not required when using Scala 2.11. This is an excerpt from the TransitiveClosureNaive.</description>
    </item>
    <item>
      <title>Release Notes - Flink 1.8</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.8/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.8/</guid>
      <description>Release Notes - Flink 1.8 # These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.7 and Flink 1.8. Please read these notes carefully if you are planning to upgrade your Flink version to 1.8.&#xA;State # Continuous incremental cleanup of old Keyed State with TTL # We introduced TTL (time-to-live) for Keyed state in Flink 1.6 (FLINK-9510). This feature allows keyed state entries to be cleaned up and made inaccessible when they are accessed.</description>
    </item>
    <item>
      <title>Release Notes - Flink 1.9</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.9/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-1.9/</guid>
      <description>Release Notes - Flink 1.9 # These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.8 and Flink 1.9. It also provides an overview of known shortcomings or limitations of new experimental features introduced in 1.9.&#xA;Please read these notes carefully if you are planning to upgrade your Flink version to 1.9.&#xA;Known shortcomings or limitations for new features # New Table / SQL Blink planner # Flink 1.</description>
    </item>
    <item>
      <title>Release Notes - Flink 2.0</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-2.0/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-2.0/</guid>
      <description>Release notes - Flink 2.0 # These release notes discuss important aspects, such as configuration, behavior or dependencies, that changed between Flink 1.20 and Flink 2.0. Please read these notes carefully if you are planning to upgrade your Flink version to 2.0.&#xA;New Features &amp;amp; Behavior Changes # State &amp;amp; Checkpoints # Disaggregated State Storage and Management # FLINK-32070 # The past decade has witnessed a dramatic shift in Flink&amp;rsquo;s deployment mode, workload patterns, and hardware improvements.</description>
    </item>
    <item>
      <title>Release Notes - Flink 2.1</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-2.1/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-2.1/</guid>
      <description>Release notes - Flink 2.1 # These release notes discuss important aspects, such as configuration, behavior or dependencies, that changed between Flink 2.0 and Flink 2.1. Please read these notes carefully if you are planning to upgrade your Flink version to 2.1.&#xA;Table SQL / API # Model DDLs using Table API # FLINK-37548 # Since Flink 2.0, we have introduced dedicated syntax for AI models, enabling users to define models as easily as creating catalog objects and invoke them like standard functions or table functions in SQL statements.</description>
    </item>
    <item>
      <title>Release Notes - Flink 2.2</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-2.2/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/release-notes/flink-2.2/</guid>
      <description>Release notes - Flink 2.2 # These release notes discuss important aspects, such as configuration, behavior or dependencies, that changed between Flink 2.1 and Flink 2.2. Please read these notes carefully if you are planning to upgrade your Flink version to 2.2.&#xA;Table SQL / API # Support VECTOR_SEARCH in Flink SQL # FLINK-38422 # Apache Flink has supported leveraging LLM capabilities through the ML_PREDICT function in Flink SQL since version 2.</description>
    </item>
    <item>
      <title>Versions</title>
      <link>//nightlies.apache.org/flink/flink-docs-release-2.2/versions/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>//nightlies.apache.org/flink/flink-docs-release-2.2/versions/</guid>
      <description> Versions # An appendix of hosted documentation for all versions of Apache Flink.&#xA;v2.2 v2.1 v2.0 v1.20 v1.19 v1.18 v1.17 v1.16 v1.15 v1.14 v1.13 v1.12 v1.11 v1.10 v1.9 v1.8 v1.7 v1.6 v1.5 v1.4 v1.3 v1.2 v1.1 v1.0 </description>
    </item>
  </channel>
</rss>
