Insert Rows
Insert data from your workflows into Google BigQuery tables for large-scale data warehousing and analytics.
Insert Rows sends data from your workflows directly into Google BigQuery tables. Use it to build data warehouses, store large datasets, and run advanced analytics on your marketing data.
When to Use It
- Store advertising performance data for long-term analysis
- Build a unified data warehouse from multiple marketing platforms
Inputs
| Field | Type | Required | Description |
|---|---|---|---|
| Project | Select | Yes | Your Google BigQuery project |
| Dataset | Select | Yes | Dataset containing your target table |
| Table | Select | Yes | Target table to insert data into |
| Data Mapping | Select | Yes | Choose “Use All Data” or “Map Specific Columns” |
| Data | Data | Yes | Data source from previous workflow steps |
| Column Mapping | Mapper | Yes* | Map data fields to table columns (*for “Map Specific Columns” only) |
| Skip Invalid Rows | Switch | Yes | Skip rows that fail validation (default: enabled) |
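Under the hood, inserts of this kind correspond to BigQuery's streaming insert API. As a rough sketch of the equivalent call using the google-cloud-bigquery Python client (the project, table, and row values below are hypothetical, and this is not the node's actual implementation):

```python
from google.cloud import bigquery

# Hypothetical project and table, for illustration only.
client = bigquery.Client(project="my-marketing-project")
table_id = "my-marketing-project.marketing.ad_performance"

rows = [
    {"report_date": "2024-05-01", "campaign": "Spring Sale", "clicks": 1240, "spend": 310.5},
    {"report_date": "2024-05-01", "campaign": "Brand Awareness", "clicks": 880, "spend": 205.0},
]

# skip_invalid_rows mirrors the node's "Skip Invalid Rows" switch:
# rows that fail schema validation are dropped, the rest are still inserted.
errors = client.insert_rows_json(table_id, rows, skip_invalid_rows=True)
print(f"{len(rows) - len(errors)} rows inserted, {len(errors)} rejected")
```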
Outputs
| Output | Description |
|---|---|
| Insert Results | Details about the insertion operation, including the success count |
Credit Cost
1 credit per operation (regardless of number of rows inserted).
Data Mapping Options
Use All Data:
- Automatically maps all data fields to matching table columns
- Best when your data structure matches your BigQuery table
- Faster setup for standard data workflows
Map Specific Columns:
- Manually map each data field to specific table columns
- Use when data structure doesn’t match table schema
- Allows field renaming and selective data insertion
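To illustrate what “Map Specific Columns” does conceptually, here is a minimal sketch that renames workflow fields to table column names before inserting; all field, column, and table names are assumptions for the example:

```python
from google.cloud import bigquery

# Hypothetical mapping from workflow field names to BigQuery column names.
COLUMN_MAP = {
    "Campaign Name": "campaign",
    "Ad Spend": "spend",
    "Report Date": "report_date",
}

def map_row(source_row: dict) -> dict:
    """Keep only mapped fields and rename them to the table's column names."""
    return {col: source_row[field] for field, col in COLUMN_MAP.items() if field in source_row}

source_rows = [{"Campaign Name": "Spring Sale", "Ad Spend": 310.5, "Report Date": "2024-05-01"}]
mapped_rows = [map_row(r) for r in source_rows]

client = bigquery.Client()
client.insert_rows_json("my-project.marketing.ad_performance", mapped_rows, skip_invalid_rows=True)
```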
Real-World Examples
- Daily Performance Archive
- Lead Generation Data Pipeline
Best Practices
Schema Management:
- Ensure your BigQuery table schema matches your data structure
- Use consistent data types across all data sources
- Plan your table schema before building workflows
Data Quality:
- Clean and validate data before insertion
- Use “Skip Invalid Rows” to handle data quality issues
- Monitor insertion results for failed rows
Performance Optimization:
- Batch multiple data sources when possible
- Use partitioned tables for time-series data
- Consider clustering for frequently queried columns
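If you manage tables yourself, one way to create a date-partitioned, clustered table with the google-cloud-bigquery Python client looks roughly like this; the schema, project, and column choices are illustrative only:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Illustrative schema for daily ad performance data.
schema = [
    bigquery.SchemaField("report_date", "DATE"),
    bigquery.SchemaField("campaign", "STRING"),
    bigquery.SchemaField("clicks", "INTEGER"),
    bigquery.SchemaField("spend", "FLOAT"),
]

table = bigquery.Table("my-project.marketing.ad_performance", schema=schema)
# Partition by day on report_date for cheaper time-range queries.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="report_date"
)
# Cluster on the column you filter or group by most often.
table.clustering_fields = ["campaign"]

client.create_table(table, exists_ok=True)
```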
Tips
Table Preparation:
- Create your BigQuery tables and schema first
- Use appropriate data types for your marketing data
- Consider partitioning by date for performance
Data Consistency:
- Standardize field names across all data sources
- Use Rename Fields before insertion to match schema
- Maintain consistent date/time formats
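A small sketch of normalizing mixed date formats to ISO 8601 (YYYY-MM-DD, which BigQuery's DATE type expects) before insertion; the input formats listed are assumptions about what your sources might emit:

```python
from datetime import datetime

# Assumed input formats seen across different marketing platforms.
KNOWN_FORMATS = ["%m/%d/%Y", "%d.%m.%Y", "%Y-%m-%d"]

def to_iso_date(value: str) -> str:
    """Return the date as YYYY-MM-DD so it matches a DATE column."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(value, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {value}")

print(to_iso_date("05/01/2024"))  # 2024-05-01
```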
Error Handling:
- Enable “Skip Invalid Rows” for production workflows
- Monitor insertion results for data quality issues
- Have fallback plans for schema mismatches
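For context, BigQuery's streaming insert reports per-row failures, which is roughly the information Insert Results surfaces. A minimal sketch of inspecting those errors with the google-cloud-bigquery client (table and row values are hypothetical):

```python
from google.cloud import bigquery

client = bigquery.Client()
rows = [
    {"report_date": "2024-05-01", "campaign": "Spring Sale", "clicks": 1240},
    {"report_date": "not-a-date", "campaign": "Bad Row", "clicks": "n/a"},  # fails validation
]

# With skip_invalid_rows=True the valid row is still inserted; the return
# value lists the index and error details of each rejected row.
errors = client.insert_rows_json(
    "my-project.marketing.ad_performance", rows, skip_invalid_rows=True
)
for entry in errors:
    print(f"Row {entry['index']} rejected: {entry['errors']}")
```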
FAQ
What happens if my data doesn't match the table schema?
If “Skip Invalid Rows” is enabled (default), rows with schema mismatches will be skipped and the operation continues. If disabled, the entire operation fails on the first invalid row.
Can I insert data into multiple tables at once?
No, each Insert Rows node targets one specific table. Use multiple Insert Rows nodes to write to different tables, or combine your data first and then split it.
How do I handle different data structures from various sources?
Use Rename Fields and Remove Fields nodes before insertion to standardize your data structure. Map fields appropriately in the Column Mapping section.
What's the difference between the two data mapping options?
“Use All Data” automatically matches field names to column names. “Map Specific Columns” lets you manually control which data goes to which columns, useful when names don’t match exactly.
Can I append to existing data or does it overwrite?
BigQuery Insert Rows always appends new data to your table. It never overwrites existing data. Use BigQuery’s built-in features for data updates or deletions.
How do I handle large datasets efficiently?
BigQuery is designed for large datasets. Consider partitioning your tables by date and use appropriate clustering for your query patterns. The credit cost is the same regardless of data size.
What if I need to transform data before insertion?
Use other workflow nodes like Rename Fields, Remove Fields, or AI Analyze Data to transform your data before sending it to BigQuery.