Debugging Issues

This page describes debugging techniques for issues encountered during the simulation and analysis process. All of these tools produce output data in both unstructured (.log) and structured form (.csv, .json, etc.). Aggregating data from a batch with thousands of jobs will often require use of UNIX tools (find, grep, awk, etc.) along with bash or Python scripts to process data in stages.

DISCO Return Codes

DISCO processes (snapshot, time-series, upgrades simulations) return these codes for known conditions.

Return Code | Description | Corrective Action

Generic error


The input configuration is invalid.


An Upgrades simulation exceeded the limit for parallel lines.

If not already done, enable ThermalUpgradeParamsModel.read_external_catalog and provide an external catalog. Or, increase ThermalUpgradeParamsModel.parallel_lines_limit to allow more parallel equipment to be placed to resolve thermal violations.


An Upgrades simulation exceeded the limit for parallel transformers.

If not already done, enable ThermalUpgradeParamsModel.read_external_catalog and provide an external catalog. Or, increase ThermalUpgradeParamsModel.parallel_transformers_limit to allow more parallel equipment to be placed to resolve thermal violations.


An OpenDSS element has unexpected properties.

Check the error message and fix the OpenDSS element definitions.


OpenDSS failed to compile a model.

Check the error message and fix the OpenDSS model definitions.


OpenDSS failed to converge.

Check the OpenDSS model. Also, refer to the OpenDSS manual to vary settings for convergence.


PyDSS failed to find a solution in its external controls.


PyDSS external controls exceeded the threshold for error counts.


PyDSS external controls exceeded the max tolerance error threshold.


An Upgrades simulation requires an external catalog for thermal upgrades in order to add or upgrade a component.

Provide an external catalog or disable ThermalUpgradeParamsModel.read_external_catalog.


The Upgrades external catalog does not define a required object.

Ensure the external catalog defines all required objects. Refer to the error message for specific details.


An Upgrades simulation detected an invalid increase in violations.

This can happen when lines or transformers are extremely overloaded. Check the OpenDSS model and modify it for such instances.

Using JADE

Please refer to the JADE documentation.

Note that if you need result status in structured form, such as when finding all failed jobs, refer to <output-dir>/results.json.
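The failed jobs can be pulled out of that file with a few lines of Python. This is a hedged sketch: the exact schema of results.json depends on your JADE version, and the "results", "name", and "return_code" fields below are assumptions you should verify against your own output file.

```python
import json

# Assumed (hypothetical) schema: a top-level "results" list whose entries
# carry a job "name" and its "return_code". Check your JADE version's file.
sample = json.loads("""
{
  "results": [
    {"name": "job1", "return_code": 0},
    {"name": "job2", "return_code": 1}
  ]
}
""")

def failed_jobs(results):
    """Return the names of jobs that exited with a nonzero return code."""
    return [r["name"] for r in results["results"] if r["return_code"] != 0]

print(failed_jobs(sample))  # ['job2']
```

Against a real batch you would replace the sample with `json.load(open("<output-dir>/results.json"))`.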

Using PyDSS

DISCO creates a PyDSS project directory for each simulation job. The directory will have the following contents:


  • store.h5

When running on an HPC, the directory contents will always be zipped because huge numbers of directories can be problematic for the shared filesystem.

Here is example content of an extracted job:

$ find output/job-outputs/p1uhs9_1247__p1udt6854__random__9__100/pydss-project


To debug a problem you can unzip the contents. However, this becomes unwieldy if you need to inspect many jobs; a tool like Vim that can view compressed files in place may serve you better.

You can also use zipgrep to search specific files within the .zip for patterns. This is extremely helpful when you need to inspect many jobs. zipgrep wraps egrep, so you may need to consult the documentation for both to customize searches.


All errors get logged in pydss.log. Look there for problems reported by OpenDSS.

Searching for errors

Here is an example of searching for a pattern without unzipping:

$ zipgrep "Convergence error" output/job-outputs/p1uhs9_1247__p1udt6854__random__9__100/pydss_project/ Logs/pydss.log

Here is an example that searches all jobs:

$ for x in `find output/job-outputs -name`; do echo "$x"; zipgrep "Convergence error" $x Logs/pydss.log; done

You will likely want to redirect that command’s output to another file for further processing (or pipe it to another command).
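If you prefer structured output over grep text, the same loop can be written in Python with the standard zipfile module. This is a sketch, not DISCO functionality; the root path and the Logs/pydss.log member name are assumptions to adjust to your layout.

```python
import io
import zipfile
from pathlib import Path

def search_zips(root, member, pattern):
    """Yield (zip_path, matching_line) for every .zip under root whose
    named member contains the pattern. Rough equivalent of the zipgrep loop."""
    for path in sorted(Path(root).rglob("*.zip")):
        with zipfile.ZipFile(path) as zf:
            if member not in zf.namelist():
                continue  # this job's archive lacks the log file
            with zf.open(member) as f:
                for line in io.TextIOWrapper(f, errors="replace"):
                    if pattern in line:
                        yield path, line.rstrip("\n")
```

For example, `list(search_zips("output/job-outputs", "Logs/pydss.log", "Convergence error"))` collects every match while keeping the association with the job it came from.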

Convergence errors

PyDSS creates a report showing each instance of a convergence error for a PV controller. An example name for this file is pydss_project__control_mode__reports.log. The file contains line-delimited JSON objects: each line is valid JSON, but the file as a whole is not.

Here is an example of one line of the file pretty-printed as JSON:

  "Report": "Convergence",
  "Scenario": "control_mode",
  "Time": 523800,
  "DateTime": "2020-01-07 01:30:00",
  "Controller": "pyCont_PVSystem_small_p1ulv32837_1_2_pv",
  "Controlled element": "PVSystem.small_p1ulv32837_1_2_pv",
  "Error": 0.00241144335086263,
  "Control algorithm": "VVar"

Here are some example commands to convert the file to JSON. They use jq, an excellent third-party JSON-processing tool that you must install yourself (on Eagle: conda install -c conda-forge jq). You may have a different method.

$ zipgrep -h Convergence output/job-outputs/p1uhs9_1247__p1udt6854__random__9__100/pydss_project/ Logs/pydss_project__control_mode__reports.log | jq . -s

Note: That command uses -h to suppress the filename in the output.
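If jq is unavailable, the same line-delimited format can be parsed with a few lines of Python; the sample records below are illustrative, not real report output.

```python
import json

def read_report(lines):
    """Parse line-delimited JSON: one object per non-empty line."""
    return [json.loads(line) for line in lines if line.strip()]

# Illustrative sample in the reports.log format described above.
sample = (
    '{"Report": "Convergence", "Error": 0.0024}\n'
    '{"Report": "Convergence", "Error": 0.0031}\n'
)
records = read_report(sample.splitlines())
print(len(records))  # 2
```

Against a real file you would pass `open(path)` (or the decompressed zip member) instead of the sample string, then `json.dump` the resulting list for further processing.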

This next command does the same for all jobs. Note that it loses the association between job and error; you would need to do some extra work to keep those associations.

$ for x in `find output/job-outputs -name`; do zipgrep -h "Convergence" $x Logs/pydss_project__control_mode__reports.log; done | jq . -s


Be aware of how much CPU and memory will be consumed by these operations. You may want to redirect this output to a temporary text file first.

In both cases you will probably want to redirect the output to a JSON file for further processing.

Running searches in parallel

The DISCO repository has a script that extracts data with the Python multiprocessing library. You can use it as an example of how to speed up large searches. Do not run this kind of search on an HPC login node.
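As an illustration of the approach (not the repository script itself), a per-job search can be fanned out with multiprocessing.Pool; `inspect_job` here is a hypothetical stand-in for whatever per-zip extraction you actually need.

```python
import multiprocessing
from pathlib import Path

def inspect_job(path):
    """Hypothetical per-job worker; replace with real zip inspection."""
    return str(path), path.stat().st_size

def inspect_all(root, workers=4):
    """Run inspect_job over every .zip under root in a process pool."""
    paths = sorted(Path(root).rglob("*.zip"))
    with multiprocessing.Pool(workers) as pool:
        return pool.map(inspect_job, paths)
```

Each worker handles one archive, so the speedup is roughly linear in the number of workers until the filesystem becomes the bottleneck.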

Refer to disco/cli/