Apache Flink 1.1.3 Officially Released

  Apache Flink 1.1.3 is another bugfix release for the Flink 1.1 series. All users are encouraged to upgrade to Flink 1.1.3; to do so, simply update the following dependencies in your project's pom.xml:

<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-java</artifactId>
  <version>1.1.3</version>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-java_2.10</artifactId>
  <version>1.1.3</version>
</dependency>
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-clients_2.10</artifactId>
  <version>1.1.3</version>
</dependency>
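After updating the POM, a quick way to confirm that the new version is picked up is to run a trivial DataStream job. The sketch below is illustrative and not part of the original announcement; the class name is hypothetical, and the anonymous MapFunction is used instead of a lambda because the Flink 1.1 Java API relies on it for type extraction:

```java
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// SmokeTest is a hypothetical class name for this verification job
public class SmokeTest {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(1, 2, 3)
           .map(new MapFunction<Integer, Integer>() {
               @Override
               public Integer map(Integer value) {
                   return value * 2;  // trivial transformation, just to exercise the pipeline
               }
           })
           .print();  // prints the doubled elements to stdout

        env.execute("flink-1.1.3-smoke-test");
    }
}
```

If the job runs without classpath errors, the 1.1.3 artifacts resolved correctly.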

Download Flink 1.1.3
A note for users of the RocksDB state backend: it is strongly recommended to switch to the fully async snapshot mode, because jobs using fully async snapshots can be upgraded to Flink 1.2 easily, whereas the semi async mode will no longer be supported in Flink 1.2:

RocksDBStateBackend backend = new RocksDBStateBackend("...");
// semi async snapshots are dropped in Flink 1.2, so enable fully async mode now
backend.enableFullyAsyncSnapshots();
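Once fully async snapshots are enabled, the backend is registered on the execution environment as usual. A minimal sketch, assuming the flink-statebackend-rocksdb dependency is on the classpath; the checkpoint URI and interval below are placeholders, not values from the original post:

```java
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// "hdfs:///flink/checkpoints" is a placeholder checkpoint URI
RocksDBStateBackend backend = new RocksDBStateBackend("hdfs:///flink/checkpoints");
backend.enableFullyAsyncSnapshots();

env.setStateBackend(backend);
env.enableCheckpointing(5000);  // placeholder: checkpoint every 5 seconds
```

With this in place, checkpoints taken on 1.1.3 remain usable after an upgrade to 1.2.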

Flink 1.1.3 Release Notes

Bug

[FLINK-2662] - CompilerException: "Bug: Plan generation for Unions picked a ship strategy between binary plan operators."
[FLINK-4311] - TableInputFormat fails when reused on next split
[FLINK-4329] - Fix Streaming File Source Timestamps/Watermarks Handling
[FLINK-4485] - Finished jobs in yarn session fill /tmp filesystem
[FLINK-4513] - Kafka connector documentation refers to Flink 1.1-SNAPSHOT
[FLINK-4514] - ExpiredIteratorException in Kinesis Consumer on long catch-ups to head of stream
[FLINK-4540] - Detached job execution may prevent cluster shutdown
[FLINK-4544] - TaskManager metrics are vulnerable to custom JMX bean installation
[FLINK-4566] - ProducerFailedException does not properly preserve Exception causes
[FLINK-4588] - Fix Merging of Covering Window in MergingWindowSet
[FLINK-4589] - Fix Merging of Covering Window in MergingWindowSet
[FLINK-4616] - Kafka consumer doesn't store last emitted watermarks per partition in state
[FLINK-4618] - FlinkKafkaConsumer09 should start from the next record on startup from offsets in Kafka
[FLINK-4619] - JobManager does not answer to client when restore from savepoint fails
[FLINK-4636] - AbstractCEPPatternOperator fails to restore state
[FLINK-4640] - Serialization of the initialValue of a Fold on WindowedStream fails
[FLINK-4651] - Re-register processing time timers at the WindowOperator upon recovery.
[FLINK-4663] - Flink JDBCOutputFormat logs wrong WARN message
[FLINK-4672] - TaskManager accidentally decorates Kill messages
[FLINK-4677] - Jars with no job executions produces NullPointerException in ClusterClient
[FLINK-4702] - Kafka consumer must commit offsets asynchronously
[FLINK-4727] - Kafka 0.9 Consumer should also checkpoint auto retrieved offsets even when no data is read
[FLINK-4732] - Maven junction plugin security threat
[FLINK-4777] - ContinuousFileMonitoringFunction may throw IOException when files are moved
[FLINK-4788] - State backend class cannot be loaded, because fully qualified name converted to lower-case

Improvement

[FLINK-4396] - GraphiteReporter class not found at startup of jobmanager
[FLINK-4574] - Strengthen fetch interval implementation in Kinesis consumer
[FLINK-4723] - Unify behaviour of committed offsets to Kafka / ZK for Kafka 0.8 and 0.9 consumer
Unless otherwise stated, all articles on this blog are original!
When reposting, please credit: reposted from 过往记忆 (https://www.iteblog.com/)
Permalink: Apache Flink 1.1.3 Officially Released (https://www.iteblog.com/archives/1838.html)