pvmehta.com
Add new columns in a DataFrame

Posted on 30-Sep-2023, updated 01-Oct-2023, by Admin
from pyspark.sql.functions import col, lit

# File location and type
file_location = "/FileStore/tables/sales_data_part1.csv"
file_type = "csv"

# CSV options
infer_schema = "false"
first_row_is_header = "true"
delimiter = ","

# The applied options are for CSV files. For other file types,
# these will be ignored.
df = (
    spark.read.format(file_type)
    .option("inferSchema", infer_schema)
    .option("header", first_row_is_header)
    .option("sep", delimiter)
    .load(file_location)
)

display(df)



# Adding a new column with a default value.
# Remember to import the lit function from
# pyspark.sql.functions.
# The following code adds a new column named
# Continent with the default value "North America".
df2 = df.withColumn("Continent", lit("North America"))
df2.display()



# Adding a new column based on existing column values.
# The following code adds a new column TotalPrice
# by multiplying Quantity and UnitPrice.
df3 = df.withColumn("TotalPrice", col("Quantity") * col("UnitPrice"))
df3.display()
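Because inferSchema was set to "false" when the CSV was loaded, Quantity and UnitPrice are string columns at this point. Spark casts them implicitly during the multiplication, but an explicit cast makes the intent and the resulting type clearer. A minimal sketch (the Spark call is shown as a comment and uses a hypothetical variable name; the lines below illustrate the same string-to-number idea locally, without a Spark session):

```python
# In the notebook (assumes df from above; df3b is a hypothetical name):
#   df3b = df.withColumn(
#       "TotalPrice",
#       col("Quantity").cast("int") * col("UnitPrice").cast("double"),
#   )

# The same idea locally, without Spark: string CSV values must become
# numbers before arithmetic.
quantity, unit_price = "6", "2.55"   # sample values as a CSV reader sees them
total_price = int(quantity) * float(unit_price)
print(total_price)
```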



# Adding multiple columns.
# The following code adds 2 columns:
# TotalPrice = Quantity * UnitPrice, and
# Region with the default value "India".
df4 = df.withColumn("TotalPrice", col("Quantity") * col("UnitPrice")).withColumn("Region", lit("India"))
df4.display()



# Adding a column using SELECT.
# The following code creates a new DataFrame with a single
# column named Region, set to "India" in every row.
df5 = df.select(lit("India").alias("Region"))
df5.display()



# To keep all existing columns and add Region, you can use the following code.
df6 = df.select(col("InvoiceNo"), col("StockCode"), col("Description"), col("Quantity"), col("InvoiceDate"), col("UnitPrice"), col("CustomerID"), col("Country"), lit("India").alias("Region"))
df6.display()
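Typing every column name by hand is error-prone. `df.columns` returns the column names as a plain Python list, so the same select can be built programmatically. A sketch, with the Spark call as a comment (df7 is a hypothetical name) and the list handling shown locally; the column list here is hard-coded from the examples above:

```python
# In the notebook (assumes df from above):
#   df7 = df.select(*[col(c) for c in df.columns], lit("India").alias("Region"))
#   df7.display()

# The list handling itself, without a Spark session:
existing = ["InvoiceNo", "StockCode", "Description", "Quantity",
            "InvoiceDate", "UnitPrice", "CustomerID", "Country"]
select_names = existing + ["Region"]
print(select_names)
```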

Python/PySpark

Copyright © 2026 pvmehta.com.
