Read CSV file using PySpark

Posted on 30-Sep-2023 by Admin

from pyspark.sql.functions import col

# File location and type
file_location = "/FileStore/tables/sales_data_part1.csv"
file_type = "csv"

# CSV options
infer_schema = "false"
first_row_is_header = "true"
delimiter = ","

# The applied options are for CSV files. For other file types, these will be ignored.
df = spark.read.format(file_type) \
  .option("inferSchema", infer_schema) \
  .option("header", first_row_is_header) \
  .option("sep", delimiter) \
  .load(file_location)

display(df)
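The spark session and the display() helper above are provided automatically in a Databricks notebook. A minimal sketch of the same read in a standalone PySpark script, assuming a local Spark install and the same file path, would first build a SparkSession and use show() instead:

from pyspark.sql import SparkSession

# Build a SparkSession explicitly (Databricks does this for you)
spark = SparkSession.builder.appName("read_csv_example").getOrCreate()

df = spark.read.format("csv") \
    .option("inferSchema", "false") \
    .option("header", "true") \
    .option("sep", ",") \
    .load("/FileStore/tables/sales_data_part1.csv")  # path assumed from the notebook above

df.show(5)  # plain-Spark equivalent of display(df)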

# Methods for renaming columns

# Method-1: withColumnRenamed
# Rename a single column
df1 = df.withColumnRenamed("InvoiceNo", "InvNo")
# Rename multiple columns by chaining
df2 = df.withColumnRenamed("StockCode", "StkCode").withColumnRenamed("Quantity", "Qty").withColumnRenamed("InvoiceDate", "InvDate")
df.display()
df1.display()
df2.display()
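When there are many columns to rename, chaining withColumnRenamed gets verbose. A minimal sketch, assuming a rename mapping built from the column names in this dataset, that applies the renames in a loop:

# Hypothetical mapping of old column names to new ones
renames = {"StockCode": "StkCode", "Quantity": "Qty", "InvoiceDate": "InvDate"}

df_renamed = df
for old_name, new_name in renames.items():
    df_renamed = df_renamed.withColumnRenamed(old_name, new_name)

df_renamed.display()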

 
# Method-2: selectExpr. This will also reduce the number of columns in the select-list.
df3 = df.selectExpr("InvoiceNo as Inv_no", "StockCode as stk_code", "Description as Desc")
df.display()
df3.display()

# Method-3: select with col().alias(). This will also reduce the number of columns in the select-list.
# Remember: to use the "col" function you need to import it first:
# from pyspark.sql.functions import col

df4 = df.select(col("InvoiceNo").alias("inv"))
df4.display()

# Create a view or table

temp_table_name = "sales_data_part1_csv"

df.createOrReplaceTempView(temp_table_name)

%sql

/* Query the created temp table in a SQL cell */

select * from sales_data_part1_csv
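The %sql magic works in a Databricks notebook cell. The same query can be run from Python with spark.sql(), which returns a DataFrame:

# Query the temp view from Python instead of a %sql cell
result_df = spark.sql("select * from sales_data_part1_csv")
result_df.display()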

# With this registered as a temp view, it will only be available to this particular notebook. If you'd like other users to be able to query this table, you can also create a table from the DataFrame.
# Once saved, this table will persist across cluster restarts and allow various users across different notebooks to query this data.
# To do so, choose your table name and run the line below.

permanent_table_name = "t_sales_data_part1_csv"

df.write.format("parquet").saveAsTable(permanent_table_name)
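Once saved, the table can be read back as a DataFrame from any notebook attached to the workspace. A minimal sketch:

# Read the permanent table back as a DataFrame
saved_df = spark.table("t_sales_data_part1_csv")
saved_df.display()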

1. This notebook is generated automatically when you load a CSV file in the "Data" section of Databricks.

2. Note: a hyphen is not allowed in a table name, so replace all hyphens with underscores or other characters.

3. Parquet is a compressed, columnar format and occupies much less space than CSV; for example, 2 GB of CSV text may shrink to around 200 MB in Parquet.

4. With infer_schema = "false", all columns come in as the string data type. With infer_schema = "true", the notebook identifies the data types and presents them in the table (see the sketch below).
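A minimal sketch of the difference described in note 4, assuming the same file path, comparing the two inferred schemas:

# With inferSchema = "false" (the default): every column is read as string
df_str = spark.read.option("header", "true").csv("/FileStore/tables/sales_data_part1.csv")
df_str.printSchema()

# With inferSchema = "true": Spark scans the data and assigns types (int, double, timestamp, ...)
df_typed = spark.read.option("header", "true").option("inferSchema", "true").csv("/FileStore/tables/sales_data_part1.csv")
df_typed.printSchema()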
