
Spark Function Tutorial Series

  This blog will soon walk through every function on the RDD API in Spark 1.2.1, covering each function's explanation, examples, and caveats, with one article published per day. The functions to be covered are listed below in alphabetical order; entries that are clickable have already been published.
aggregate
aggregateByKey
cache
cartesian
checkpoint
coalesce
cogroup, groupWith
collect, toArray
collectAsMap
combineByKey
compute
context, sparkContext
count
countApprox
countByKey
countByKeyApprox
countByValue
countByValueApprox
countApproxDistinct
countApproxDistinctByKey
dependencies
distinct
first
filter
filterWith
flatMap
flatMapValues
flatMapWith
fold
foldByKey
foreach
foreachPartition
foreachWith
generator, setGenerator
getCheckpointFile
preferredLocations
getStorageLevel
glom
groupBy
groupByKey
histogram
id
intersection
isCheckpointed
iterator
join
keyBy
keys
leftOuterJoin
lookup
map
mapPartitions
mapPartitionsWithContext
mapPartitionsWithIndex
mapPartitionsWithSplit
mapValues
mapWith
max
mean, meanApprox
min
name, setName
partitionBy
partitioner
partitions
persist, cache
pipe
randomSplit
reduce
reduceByKey, reduceByKeyLocally, reduceByKeyToDriver
rightOuterJoin
sample
saveAsHadoopFile, saveAsHadoopDataset, saveAsNewAPIHadoopFile
saveAsObjectFile
saveAsSequenceFile
saveAsTextFile
stats
sortBy
sortByKey
stdev, sampleStdev
subtract
subtractByKey
sum, sumApprox
take
takeOrdered
takeSample
toDebugString
toJavaRDD
top
toString
union, ++
unpersist
values
variance, sampleVariance
zip
zipPartitions
zipWithIndex
zipWithUniqueId

Unless otherwise stated, all articles on this blog are original!
Copyright of original articles belongs to 过往记忆大数据 (iteblog); reproduction without permission is prohibited.
Permalink: 【Spark Function Tutorial Series】(https://www.iteblog.com/archives/1270.html)