When to Use It
⚠️ Advanced Node: Use this only when existing nodes cannot accomplish your task. Always check if a specialized node exists first.

Use the Python Code node to:
- Perform complex calculations not available in existing math/utility nodes
- Transform data in highly specific ways not supported by existing data nodes
- Apply advanced statistical analysis, machine learning, or scientific computing
- Parse or manipulate complex data structures requiring custom logic
- Create sophisticated business logic that combines multiple operations
- Process data with specialized libraries (computer vision, audio analysis, NLP)
Inputs
Field | Type | Required | Description |
---|---|---|---|
Variables | Variables | No | Dynamic data from other nodes injected into your code context |
Code | Code | Yes | The Python code to execute (result must be on the last line) |
Packages | List | No | Additional Python packages to install (max 5 packages) |
Outputs
Output | Description |
---|---|
Data | The result value returned by your code execution |
Credit Cost
2 credits per execution.

How It Works
Python Code executes your custom Python script in a Jupyter notebook-like environment with access to extensive data science libraries. As in a Jupyter notebook, only the result of the last line is returned as output. Variables from other nodes are automatically injected into the execution context, so you can reference them directly in your code (see the sketch after the list below).

Key Features:
- Jupyter notebook-style execution (last line becomes the output)
- Access to 30+ pre-installed data science libraries
- Automatic variable injection from workflow data
- JSON-safe output formatting
- Secure execution environment
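A minimal sketch of the execution model, assuming a hypothetical workflow variable named `prices` (a list of numbers) has been added in the Variables field:

```python
# `prices` is a hypothetical workflow variable added in the Variables field,
# e.g. [19.99, 4.50, 12.00]; it is injected automatically into this context.
total = sum(prices)
average = total / len(prices)

# No print() or return needed: the value on the last line becomes the node's Data output.
{"total": round(total, 2), "average": round(average, 2)}
```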
Available Libraries
The Python Code node includes 30+ pre-installed libraries for data science, web scraping, visualization, and more:

Library | Version | Use Case |
---|---|---|
aiohttp | v3.9.3 | Asynchronous HTTP client/server |
beautifulsoup4 | v4.12.3 | Web scraping and HTML/XML parsing |
bokeh | v3.3.4 | Interactive visualization |
gensim | v4.3.2 | Topic modeling and document analysis |
imageio | v2.34.0 | Image I/O operations |
joblib | v1.3.2 | Parallel computing and model persistence |
librosa | v0.10.1 | Audio analysis and music information retrieval |
matplotlib | v3.8.3 | Data visualization and plotting |
nltk | v3.8.1 | Natural language processing |
numpy | v1.26.4 | Numerical computing and arrays |
opencv-python | v4.9.0.80 | Computer vision and image processing |
openpyxl | v3.1.2 | Excel file reading and writing |
pandas | v1.5.3 | Data manipulation and analysis |
plotly | v5.19.0 | Interactive web-based visualizations |
pytest | v8.1.0 | Testing framework |
python-docx | v1.1.0 | Microsoft Word document manipulation |
pytz | v2024.1 | Timezone handling |
requests | v2.26.0 | HTTP requests and API calls |
scikit-image | v0.22.0 | Image processing algorithms |
scikit-learn | v1.4.1.post1 | Machine learning library |
scipy | v1.12.0 | Scientific computing |
seaborn | v0.13.2 | Statistical data visualization |
soundfile | v0.12.1 | Audio file I/O |
spacy | v3.7.4 | Advanced natural language processing |
sympy | v1.12 | Symbolic mathematics |
textblob | v0.18.0 | Simple text processing |
tornado | v6.4 | Web framework and networking |
urllib3 | v1.26.7 | HTTP client library |
xarray | v2024.2.0 | Multi-dimensional arrays and datasets |
xlrd | v2.0.1 | Excel file reading |
Installing Additional Packages
Need a package not included in the pre-installed libraries? You can install up to 5 additional Python packages using the Packages field. A short example follows these guidelines.

Guidelines:
- Maximum 5 packages per execution
- Common data science packages (pandas, numpy, requests, etc.) are already available - don’t reinstall them
- Use specific versions when needed: `"package==1.2.3"`
- Packages are installed at runtime before code execution, which may slightly increase execution time
- Packages are temporary and only available for the current execution
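For instance, the FAQ below uses the `countryinfo` package. A minimal sketch, assuming the Packages field is set to `["countryinfo==0.1.2"]` and that the package exposes `CountryInfo(...).capital()`:

```python
# Assumes Packages: ["countryinfo==0.1.2"] - installed at runtime before this code executes.
from countryinfo import CountryInfo

country = CountryInfo("Singapore")

# Last line becomes the node's Data output.
{"capital": country.capital(), "currencies": country.currencies()}
```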
Code Requirements
Jupyter-Style Execution:
- Works like a Jupyter notebook - only the last line’s result is returned
- No need for explicit `return` statements - the final expression becomes the output automatically

Output Requirements:
- Place your result as the final line of code
- Must be a JSON-safe type: dict, list, str, int, float, bool, or None
- No `print()` statements for output - use the last line instead

These cannot be returned directly (see the sketch below):
- pandas DataFrames (convert to dict/list first: `df.to_dict('records')`)
- numpy arrays (use `.tolist()`)
- Custom objects or functions
- Complex nested objects
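As a quick sketch of converting non-JSON-safe objects before the final line (pandas and numpy are pre-installed; the data here is made up for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"name": ["Ada", "Grace"], "score": [91, 88]})
scores = np.array([91, 88])

# Convert to JSON-safe types before returning on the last line.
{
    "rows": df.to_dict("records"),       # DataFrame -> list of dicts
    "scores": scores.tolist(),           # numpy array -> plain list
    "mean_score": float(scores.mean()),  # numpy scalar -> Python float
}
```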
Variables Usage
Variables from other nodes are automatically available in your code context: add a variable in the Variables field, then reference it by name directly in your code, as the example sketches below illustrate.

Examples
Example 1: Calculate Campaign Performance
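The original example content isn't reproduced here, so the following is an illustrative sketch. It assumes the Variables field injects `impressions`, `clicks`, `conversions`, and `spend` (hypothetical names) from earlier nodes:

```python
# Hypothetical injected variables: impressions, clicks, conversions, spend
ctr = clicks / impressions if impressions else 0
conversion_rate = conversions / clicks if clicks else 0
cpa = spend / conversions if conversions else 0

# Final line = node output
{
    "ctr_pct": round(ctr * 100, 2),
    "conversion_rate_pct": round(conversion_rate * 100, 2),
    "cost_per_acquisition": round(cpa, 2),
}
```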
Example 2: Data Cleaning and Transformation
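Again an illustrative sketch, assuming a hypothetical variable `raw_contacts` (a list of dicts with inconsistent formatting) is injected from an earlier node:

```python
# `raw_contacts` is a hypothetical injected variable, e.g.
# [{"name": " ada LOVELACE ", "email": " ADA@Example.com "}, ...]
cleaned = []
for contact in raw_contacts:
    name = (contact.get("name") or "").strip().title()
    email = (contact.get("email") or "").strip().lower()
    if name and email:
        cleaned.append({"name": name, "email": email})

# A list of JSON-safe dicts is a valid output for downstream nodes.
cleaned
```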
FAQ
What happens if my code has an error?
The node will fail and display the Python error message. Check your syntax, variable names, and ensure all required libraries are available.
Can I install additional packages?
Yes! Use the Packages field to install up to 5 additional Python packages. Specify package names as a list (e.g., `["countryinfo==0.1.2"]`). Common packages like pandas, numpy, and requests are already pre-installed, so don’t reinstall them.
How do I return multiple values?
Combine them into a single dictionary: `{"value1": result1, "value2": result2}`. You can also return a list of dictionaries for multiple records.
Why can't I return a pandas DataFrame directly?
DataFrames aren’t JSON-serializable. Convert them first: `df.to_dict('records')` for row-based data or `df.to_dict()` for column-based data.
How do I debug my code?
Return simple values to check intermediate results. For example, make `{"debug": variable_name}` the last line of your code to see what data you’re working with.
Can I make HTTP requests in my code?
Yes, use the `requests` library to make API calls: `import requests; response = requests.get('https://api.example.com')`.
What if I need to process very large datasets?
Consider using pandas for efficient data operations, or break processing into smaller chunks. Be mindful of memory usage and execution time limits.
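For example, a sketch of chunked processing, assuming a large list of dicts named `records` is injected as a variable:

```python
# `records` is a hypothetical injected variable holding many dicts with an "amount" key.
chunk_size = 1000
total_amount = 0.0

for start in range(0, len(records), chunk_size):
    chunk = records[start:start + chunk_size]
    total_amount += sum(item.get("amount", 0) for item in chunk)

{"record_count": len(records), "total_amount": round(total_amount, 2)}
```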
Can I save files or write to disk?
No, the execution environment is read-only. All data must be returned through the result value for use in subsequent workflow steps.