Single Location#
Author: Tobin Ford | tobin.ford@nrel.gov
2024
A simple object-oriented workflow walkthrough using pvdeg.
# if running on google colab, uncomment the next line and execute this cell to install the dependencies and prevent "ModuleNotFoundError" in later cells:
# !pip install pvdeg
import pvdeg
import os
# This information helps with debugging and getting support :)
import sys
import platform
print("Working on a ", platform.system(), platform.release())
print("Python version ", sys.version)
print("pvdeg version ", pvdeg.__version__)
Working on a Windows 11
Python version 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:49:16) [MSC v.1929 64 bit (AMD64)]
pvdeg version 0.5.1.dev623+g51cc68b8e.d20250905
Define Single Point Scenario Object#
Scenario is a general class that replaces the legacy functional pvdeg analysis approach with an object-oriented one. A Scenario can perform single-location or geospatial analysis. The constructor takes many arguments, but the only one required for the following use cases is the name attribute. The name is visible when we display the entire scenario and appears in the file of saved scenario information. We also need to provide the constructor with our API key and email.
A way around this is to provide the weather and metadata in the pipeline job arguments, or to load data from elsewhere and supply it in the same fashion.
simple_scenario = pvdeg.Scenario(
name="Point Minimum Standoff", email="user@mail.com", api_key="DEMO_KEY"
)
Adding A Location#
To add a single point using data from the Physical Solar Model (PSM3), simply feed the scenario a single coordinate as a tuple via the addLocation method. Currently this is the only way to add a location to a non-geospatial scenario; all of the other arguments are unused when Scenario.geospatial == False.
Attempting to add a second location by calling the method again with a different coordinate pair will overwrite the location data previously stored in the class instance.
simple_scenario.addLocation(
lat_long=(25.783388, -80.189029),
)
Column "relative_humidity" not found in DataFrame. Calculating...
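The overwrite behavior described above can be illustrated with a minimal stand-in class. This is not pvdeg code; the class and method names here are hypothetical and only mirror the documented semantics (each call replaces the previously stored coordinates):

```python
class _LocationHolder:
    """Toy stand-in for a non-geospatial Scenario's location storage."""

    def __init__(self):
        self.lat_long = None

    def add_location(self, lat_long):
        # a second call simply replaces the first location
        self.lat_long = lat_long


holder = _LocationHolder()
holder.add_location((25.783388, -80.189029))  # Miami, FL
holder.add_location((39.7555, -105.2211))     # Golden, CO

# only the most recently added location survives
print(holder.lat_long)
```

If you need results for several points without a geospatial scenario, create a separate Scenario instance per location.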
simple_scenario.weather_data
| Year | Month | Day | Hour | Minute | temp_air | dew_point | dhi | dni | ghi | albedo | pressure | wind_direction | wind_speed | relative_humidity | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2019-01-01 00:30:00-05:00 | 2019 | 1 | 1 | 0 | 30 | 24.1 | 21.8 | 0.0 | 0.0 | 0.0 | 0.08 | 1019.0 | 128.0 | 3.6 | 87.000575 |
| 2019-01-01 01:30:00-05:00 | 2019 | 1 | 1 | 1 | 30 | 24.1 | 21.7 | 0.0 | 0.0 | 0.0 | 0.08 | 1019.0 | 126.0 | 3.5 | 86.470668 |
| 2019-01-01 02:30:00-05:00 | 2019 | 1 | 1 | 2 | 30 | 24.0 | 21.5 | 0.0 | 0.0 | 0.0 | 0.08 | 1019.0 | 124.0 | 3.4 | 85.933794 |
| 2019-01-01 03:30:00-05:00 | 2019 | 1 | 1 | 3 | 30 | 24.0 | 21.1 | 0.0 | 0.0 | 0.0 | 0.08 | 1019.0 | 122.0 | 3.4 | 83.852221 |
| 2019-01-01 04:30:00-05:00 | 2019 | 1 | 1 | 4 | 30 | 24.0 | 20.9 | 0.0 | 0.0 | 0.0 | 0.08 | 1019.0 | 121.0 | 3.5 | 82.828112 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 2024-12-31 19:30:00-05:00 | 2024 | 12 | 31 | 19 | 30 | 24.4 | 21.7 | 0.0 | 0.0 | 0.0 | 0.08 | 1020.0 | 128.0 | 4.3 | 84.929210 |
| 2024-12-31 20:30:00-05:00 | 2024 | 12 | 31 | 20 | 30 | 24.3 | 21.8 | 0.0 | 0.0 | 0.0 | 0.08 | 1020.0 | 128.0 | 4.2 | 85.963157 |
| 2024-12-31 21:30:00-05:00 | 2024 | 12 | 31 | 21 | 30 | 24.2 | 21.8 | 0.0 | 0.0 | 0.0 | 0.08 | 1020.0 | 128.0 | 4.0 | 86.480116 |
| 2024-12-31 22:30:00-05:00 | 2024 | 12 | 31 | 22 | 30 | 24.2 | 21.9 | 0.0 | 0.0 | 0.0 | 0.08 | 1019.0 | 128.0 | 3.9 | 87.009680 |
| 2024-12-31 23:30:00-05:00 | 2024 | 12 | 31 | 23 | 30 | 24.1 | 21.9 | 0.0 | 0.0 | 0.0 | 0.08 | 1019.0 | 128.0 | 3.8 | 87.533326 |
8760 rows × 15 columns
Scenario Pipelines#
The pipeline is a list of tasks, called jobs, for the scenario to run. We will populate the pipeline with a list of jobs before executing them all at once.
To add a job to the pipeline, use the addJob method. Two examples of adding functions to the pipeline are shown below.
Adding a job without function arguments#
The simplest case of adding a job to the pipeline is when the function only requires weather and metadata. In the function definition and docstring these appear as weather_df and meta. Since these attributes are stored on our scenario instance, we do not have to provide them ourselves. We can simply add the function as shown below.
simple_scenario.addJob(func=pvdeg.standards.standoff)
Adding a job with function arguments#
When adding a job whose function requires other arguments, such as solder_fatigue, which requires a value for wind_factor, we need to provide them. The most straightforward way to do this is to build a kwargs dictionary and pass it to addJob. We do not unpack the dictionary when adding the job; this happens inside the scenario at pipeline runtime (when run is called).
kwargs = {"wind_factor": 0.33}
simple_scenario.addJob(func=pvdeg.fatigue.solder_fatigue, func_kwarg=kwargs)
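The deferred unpacking can be sketched with plain Python. This is a hypothetical structure, not pvdeg's internal job format: the kwargs dictionary is stored alongside the function and only unpacked with `**` when the pipeline runs.

```python
def solder_fatigue_stub(wind_factor):
    # stand-in for pvdeg.fatigue.solder_fatigue; returns a dummy value
    return 10.0 * wind_factor


pipeline = []


def add_job(func, func_kwarg=None):
    # store the kwargs dict as-is; no unpacking happens here
    pipeline.append({"job": func, "params": func_kwarg or {}})


def run_pipeline():
    # the dict is unpacked here, at runtime, for each job
    return [job["job"](**job["params"]) for job in pipeline]


add_job(solder_fatigue_stub, func_kwarg={"wind_factor": 0.33})
results = run_pipeline()
```

Storing the dictionary unevaluated is what lets the scenario inject its own weather and metadata into each call just before execution.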
Adding a job with weather and metadata from outside of the class#
Not functional#
We could directly set the weather data with scenario.weather_data = weather and scenario.meta_data = meta, but that would apply to every job in the pipeline.
If local weather data (or data from another source) is available and we want to use it instead of the PSM3 data at a latitude and longitude, we can also provide the weather and metadata in the function arguments. This approach is best avoided when possible, but it follows the same syntax as providing other function arguments. See the example below.
PSM_FILE = os.path.join(pvdeg.DATA_DIR, "psm3_demo.csv")
weather, meta = pvdeg.weather.read(PSM_FILE, "psm")
kwargs = {"weather_df": weather, "meta": meta}
simple_scenario.addJob(func=pvdeg.standards.standoff, func_kwarg=kwargs)
# FIX THIS CASE IN SCENARIO CLASS
# (simple_scenario.pipeline[1]['job'])(**simple_scenario.pipeline[1]['params'])
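Conceptually, a job that supplies its own weather_df and meta should shadow the scenario-level defaults. A minimal sketch of that precedence, using a hypothetical helper (this is not pvdeg's actual argument-resolution logic):

```python
# placeholder strings stand in for the scenario's stored DataFrame/dict
scenario_defaults = {"weather_df": "scenario_weather", "meta": "scenario_meta"}


def resolve_args(job_kwargs):
    # later dict wins in a merge: job-level kwargs override scenario defaults
    return {**scenario_defaults, **job_kwargs}


no_override = resolve_args({})                                # scenario data used
with_override = resolve_args({"weather_df": "local_weather"})  # job override wins
```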
View Scenario#
The viewScenario method provides an overview of the information contained in your scenario object. Here you can see whether it contains the location's weather and metadata, as well as the jobs in the pipeline and their arguments.
simple_scenario.viewScenario()
Name : Point Minimum Standoff
Pipeline :
URFAT.job RENCW.job RENCW.params.wind_factor SAAUO.job SAAUO.params.weather_df SAAUO.params.meta.Source SAAUO.params.meta.Location ID SAAUO.params.meta.City SAAUO.params.meta.State SAAUO.params.meta.Country SAAUO.params.meta.Clearsky DHI Units SAAUO.params.meta.Clearsky DNI Units SAAUO.params.meta.Clearsky GHI Units SAAUO.params.meta.Dew Point Units SAAUO.params.meta.DHI Units SAAUO.params.meta.DNI Units SAAUO.params.meta.GHI Units SAAUO.params.meta.Solar Zenith Angle Units SAAUO.params.meta.Temperature Units SAAUO.params.meta.Pressure Units SAAUO.params.meta.Relative Humidity Units SAAUO.params.meta.Precipitable Water Units SAAUO.params.meta.Wind Direction Units SAAUO.params.meta.Wind Speed Units SAAUO.params.meta.Cloud Type -15 SAAUO.params.meta.Cloud Type 0 SAAUO.params.meta.Cloud Type 1 SAAUO.params.meta.Cloud Type 2 SAAUO.params.meta.Cloud Type 3 SAAUO.params.meta.Cloud Type 4 SAAUO.params.meta.Cloud Type 5 SAAUO.params.meta.Cloud Type 6 SAAUO.params.meta.Cloud Type 7 SAAUO.params.meta.Cloud Type 8 SAAUO.params.meta.Cloud Type 9 SAAUO.params.meta.Cloud Type 10 SAAUO.params.meta.Cloud Type 11 SAAUO.params.meta.Cloud Type 12 SAAUO.params.meta.Fill Flag 0 SAAUO.params.meta.Fill Flag 1 SAAUO.params.meta.Fill Flag 2 SAAUO.params.meta.Fill Flag 3 SAAUO.params.meta.Fill Flag 4 SAAUO.params.meta.Fill Flag 5 SAAUO.params.meta.Surface Albedo Units SAAUO.params.meta.Version SAAUO.params.meta.latitude SAAUO.params.meta.longitude SAAUO.params.meta.altitude SAAUO.params.meta.tz
0 <function standoff at 0x000001DBFC6E9080> <function solder_fatigue at 0x000001DBF7A53420> 0.33 <function standoff at 0x000001DBFC6E9080> Year Month Day Hour Minute dni dhi ghi \
1999-01-01 00:30:00-07:00 1999 1 1 0 30 0.0 0.0 0.0
1999-01-01 01:30:00-07:00 1999 1 1 1 30 0.0 0.0 0.0
1999-01-01 02:30:00-07:00 1999 1 1 2 30 0.0 0.0 0.0
1999-01-01 03:30:00-07:00 1999 1 1 3 30 0.0 0.0 0.0
1999-01-01 04:30:00-07:00 1999 1 1 4 30 0.0 0.0 0.0
... ... ... ... ... ... ... ... ...
1999-12-31 19:30:00-07:00 1999 12 31 19 30 0.0 0.0 0.0
1999-12-31 20:30:00-07:00 1999 12 31 20 30 0.0 0.0 0.0
1999-12-31 21:30:00-07:00 1999 12 31 21 30 0.0 0.0 0.0
1999-12-31 22:30:00-07:00 1999 12 31 22 30 0.0 0.0 0.0
1999-12-31 23:30:00-07:00 1999 12 31 23 30 0.0 0.0 0.0
temp_air dew_point wind_speed relative_humidity
1999-01-01 00:30:00-07:00 0.0 -5.0 1.8 79.39
1999-01-01 01:30:00-07:00 0.0 -4.0 1.7 80.84
1999-01-01 02:30:00-07:00 0.0 -4.0 1.5 82.98
1999-01-01 03:30:00-07:00 0.0 -4.0 1.3 85.01
1999-01-01 04:30:00-07:00 0.0 -4.0 1.3 85.81
... ... ... ... ...
1999-12-31 19:30:00-07:00 0.0 -3.0 0.9 83.63
1999-12-31 20:30:00-07:00 0.0 -3.0 1.2 86.82
1999-12-31 21:30:00-07:00 0.0 -4.0 1.6 83.78
1999-12-31 22:30:00-07:00 0.0 -4.0 1.7 81.22
1999-12-31 23:30:00-07:00 0.0 -5.0 1.8 79.43
[8760 rows x 12 columns] NSRDB 145809 - - - w/m2 w/m2 w/m2 c w/m2 w/m2 w/m2 Degree c mbar % cm Degrees m/s N/A Clear Probably Clear Fog Water Super-Cooled Water Mixed Opaque Ice Cirrus Overlapping Overshooting Unknown Dust Smoke N/A Missing Image Low Irradiance Exceeds Clearsky Missing CLoud Properties Rayleigh Violation N/A 3.0.6 39.73 -105.18 1820 -7
Results : Pipeline results :
Pipeline has not been run
'gids : [1060499]'
'test modules :'
scenario weather : Year Month Day Hour Minute temp_air \
2019-01-01 00:30:00-05:00 2019 1 1 0 30 24.1
2019-01-01 01:30:00-05:00 2019 1 1 1 30 24.1
2019-01-01 02:30:00-05:00 2019 1 1 2 30 24.0
2019-01-01 03:30:00-05:00 2019 1 1 3 30 24.0
2019-01-01 04:30:00-05:00 2019 1 1 4 30 24.0
... ... ... ... ... ... ...
2024-12-31 19:30:00-05:00 2024 12 31 19 30 24.4
2024-12-31 20:30:00-05:00 2024 12 31 20 30 24.3
2024-12-31 21:30:00-05:00 2024 12 31 21 30 24.2
2024-12-31 22:30:00-05:00 2024 12 31 22 30 24.2
2024-12-31 23:30:00-05:00 2024 12 31 23 30 24.1
dew_point dhi dni ghi albedo pressure \
2019-01-01 00:30:00-05:00 21.8 0.0 0.0 0.0 0.08 1019.0
2019-01-01 01:30:00-05:00 21.7 0.0 0.0 0.0 0.08 1019.0
2019-01-01 02:30:00-05:00 21.5 0.0 0.0 0.0 0.08 1019.0
2019-01-01 03:30:00-05:00 21.1 0.0 0.0 0.0 0.08 1019.0
2019-01-01 04:30:00-05:00 20.9 0.0 0.0 0.0 0.08 1019.0
... ... ... ... ... ... ...
2024-12-31 19:30:00-05:00 21.7 0.0 0.0 0.0 0.08 1020.0
2024-12-31 20:30:00-05:00 21.8 0.0 0.0 0.0 0.08 1020.0
2024-12-31 21:30:00-05:00 21.8 0.0 0.0 0.0 0.08 1020.0
2024-12-31 22:30:00-05:00 21.9 0.0 0.0 0.0 0.08 1019.0
2024-12-31 23:30:00-05:00 21.9 0.0 0.0 0.0 0.08 1019.0
wind_direction wind_speed relative_humidity
2019-01-01 00:30:00-05:00 128.0 3.6 87.000575
2019-01-01 01:30:00-05:00 126.0 3.5 86.470668
2019-01-01 02:30:00-05:00 124.0 3.4 85.933794
2019-01-01 03:30:00-05:00 122.0 3.4 83.852221
2019-01-01 04:30:00-05:00 121.0 3.5 82.828112
... ... ... ...
2024-12-31 19:30:00-05:00 128.0 4.3 84.929210
2024-12-31 20:30:00-05:00 128.0 4.2 85.963157
2024-12-31 21:30:00-05:00 128.0 4.0 86.480116
2024-12-31 22:30:00-05:00 128.0 3.9 87.009680
2024-12-31 23:30:00-05:00 128.0 3.8 87.533326
[8760 rows x 15 columns]
Display#
The fancier cousin of viewScenario. It only works in a Jupyter environment, as it uses a special IPython backend to render the HTML and JavaScript.
It can be called with just the Scenario instance as follows
simple_scenario
or using the display function
display(simple_scenario)
simple_scenario
self.name: Point Minimum Standoff
self.gids: [1060499]
self.email: user@mail.com
self.api_key: DEMO_KEY
self.results
None
self.pipeline
► standoff, #URFAT
► solder_fatigue, #RENCW
► standoff, #SAAUO
self.modules
self.weather_data
► Weather Data
self.meta_data
{'Source': 'NSRDB', 'Location ID': '1060499', 'City': '-', 'State': '-', 'Country': '-', 'Clearsky DHI Units': 'w/m2', 'Clearsky DNI Units': 'w/m2', 'Clearsky GHI Units': 'w/m2', 'Dew Point Units': 'c', 'DHI Units': 'w/m2', 'DNI Units': 'w/m2', 'GHI Units': 'w/m2', 'Solar Zenith Angle Units': 'Degree', 'Temperature Units': 'c', 'Pressure Units': 'mbar', 'Relative Humidity Units': '%', 'Precipitable Water Units': 'cm', 'Wind Direction Units': 'Degrees', 'Wind Speed Units': 'm/s', 'Cloud Type -15': 'N/A', 'Cloud Type 0': 'Clear', 'Cloud Type 1': 'Probably Clear', 'Cloud Type 2': 'Fog', 'Cloud Type 3': 'Water', 'Cloud Type 4': 'Super-Cooled Water', 'Cloud Type 5': 'Mixed', 'Cloud Type 6': 'Opaque Ice', 'Cloud Type 7': 'Cirrus', 'Cloud Type 8': 'Overlapping', 'Cloud Type 9': 'Overshooting', 'Cloud Type 10': 'Unknown', 'Cloud Type 11': 'Dust', 'Cloud Type 12': 'Smoke', 'Fill Flag 0': 'N/A', 'Fill Flag 1': 'Missing Image', 'Fill Flag 2': 'Low Irradiance', 'Fill Flag 3': 'Exceeds Clearsky', 'Fill Flag 4': 'Missing CLoud Properties', 'Fill Flag 5': 'Rayleigh Violation', 'Surface Albedo Units': 'N/A', 'Version': '4.1.2.dev4+g3b38bc8.d20250228', 'latitude': 25.77, 'longitude': -80.18, 'altitude': 7, 'tz': -5, 'wind_height': 2}
All attributes can be accessed by the names shown above.
Executing Pipeline Jobs#
To run the pipeline after we have populated it with the desired jobs, call the run method on our scenario instance. This runs all of the jobs we have previously added. The functions that need weather and metadata will grab them from the scenario instance, using the location added above. The pipeline job results are saved to the scenario instance.
simple_scenario.run()
The array surface_tilt angle was not provided, therefore the latitude of 25.8 was used.
The array azimuth was not provided, therefore an azimuth of 180.0 was used.
The array surface_tilt angle was not provided, therefore the latitude of 25.8 was used.
The array azimuth was not provided, therefore an azimuth of 180.0 was used.
The array surface_tilt angle was not provided, therefore the latitude of 39.7 was used.
The array azimuth was not provided, therefore an azimuth of 180.0 was used.
Results Series#
We use a pandas Series to store the various return values of the functions run in our pipeline. The Series can partially obfuscate the DataFrames within it, so to access a DataFrame, index the Series with the job's name using dictionary syntax. If the job was called 'KSDJQ', use simple_scenario.results['KSDJQ'] to directly access the result for that job.
print(simple_scenario.results)
print("We can't see our data in here so we need to do another step", end="\n\n")
# to see all available outputs of results do
print(
f"this is the list of all available frames in results : {simple_scenario.results.index}\n"
)
# loop over all results and display
for keys, results in simple_scenario.results.items():
print(keys)
display(results)
URFAT x T98_0 T98_inf
0 0.951119 ...
RENCW RENCW
0 3.872319
SAAUO x T98_0 T98_inf
0 2.008636 ...
dtype: object
We can't see our data in here so we need to do another step
this is the list of all available frames in results : Index(['URFAT', 'RENCW', 'SAAUO'], dtype='object')
URFAT
| x | T98_0 | T98_inf | |
|---|---|---|---|
| 0 | 0.951119 | 73.149158 | 50.014684 |
RENCW
| RENCW | |
|---|---|
| 0 | 3.872319 |
SAAUO
| x | T98_0 | T98_inf | |
|---|---|---|---|
| 0 | 2.008636 | 77.038644 | 50.561112 |
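The indexing pattern above can be reproduced with a small self-contained sketch. The job names and values are copied from the output shown here; the Series structure is the same one pvdeg uses, but this snippet builds it by hand rather than running a pipeline:

```python
import pandas as pd

# a Series whose values are DataFrames, keyed by the generated job names
results = pd.Series(
    {
        "URFAT": pd.DataFrame({"x": [0.951119]}),
        "SAAUO": pd.DataFrame({"x": [2.008636]}),
    }
)

# dictionary-style indexing returns the full DataFrame for one job
standoff_result = results["URFAT"]

# from there, normal pandas access applies
x_value = standoff_result["x"].iloc[0]
```

Iterating with `results.items()` as in the cell above visits each (job name, DataFrame) pair in turn.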
Cleaning Up the Scenario#
Each scenario object creates a directory named pvd_job_... that contains information about the scenario instance. To remove the directory and all of its contents, call clean on the scenario. This permanently deletes the directory created by the scenario.
simple_scenario.clean()
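Conceptually, clean amounts to recursively deleting the scenario's working directory. The sketch below demonstrates that operation on a throwaway temporary directory rather than a real pvd_job_... folder, since the deletion is irreversible:

```python
import os
import shutil
import tempfile

# create a throwaway directory mimicking a scenario's working folder
job_dir = tempfile.mkdtemp(prefix="pvd_job_")
with open(os.path.join(job_dir, "scenario.json"), "w") as f:
    f.write("{}")  # placeholder scenario file

# recursively delete the directory and everything in it (irreversible)
shutil.rmtree(job_dir)
```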