Method 1: use pandas as a helper
from pyspark import SparkContext
from pyspark.sql import SQLContext
import pandas as pd

sc = SparkContext()
sqlContext = SQLContext(sc)

# Read the CSV into a pandas DataFrame on the driver, then convert it to a Spark DataFrame
df = pd.read_csv(r'game-clicks.csv')
sdf = sqlContext.createDataFrame(df)
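Note that this route first loads the whole CSV into driver memory through pandas, so it is only suitable for files that fit on a single machine. A minimal sketch to check the result, reusing the sdf created above:

# Inspect the inferred schema and the first few rows of the converted DataFrame
sdf.printSchema()
sdf.show(5)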
Method 2: pure Spark
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext()
sqlContext = SQLContext(sc)

# Read the CSV directly with the spark-csv data source
# (on Spark 1.x this requires the com.databricks:spark-csv package)
sdf = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferSchema='true').load('game-clicks.csv')
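On Spark 2.0 and later, the external spark-csv package is no longer needed because a CSV reader is built in. A minimal sketch of the equivalent call through SparkSession (the session name 'csv-demo' is just an illustrative assumption):

from pyspark.sql import SparkSession

# Built-in CSV data source, available from Spark 2.0 onward
spark = SparkSession.builder.appName('csv-demo').getOrCreate()
sdf = spark.read.csv('game-clicks.csv', header=True, inferSchema=True)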
That's all for this article on the two ways to read a CSV file into a DataFrame with pyspark. We hope it gives you a useful reference, and we also hope you will continue to support 脚本之家.