Insert Rows sends data from your workflows directly into Google BigQuery tables. It's essential for building data warehouses, storing large datasets, and enabling advanced analytics on your marketing data.


When to Use It

  • Store advertising performance data for long-term analysis
  • Build a unified data warehouse from multiple marketing platforms

Inputs

Field | Type | Required | Description
Project | Select | Yes | Your Google BigQuery project
Dataset | Select | Yes | Dataset containing your target table
Table | Select | Yes | Target table to insert data into
Data Mapping | Select | Yes | Choose “Use All Data” or “Map Specific Columns”
Data | Data | Yes | Data source from previous workflow steps
Column Mapping | Mapper | Yes* | Map data fields to table columns (*required only for Map Specific Columns)
Skip Invalid Rows | Switch | Yes | Skip rows that fail validation (default: enabled)

Outputs

Output | Description
Insert Results | Details about the insertion operation, including success count

Credit Cost

1 credit per operation (regardless of the number of rows inserted).


Data Mapping Options

Use All Data:

  • Automatically maps all data fields to matching table columns
  • Best when your data structure matches your BigQuery table
  • Faster setup for standard data workflows

Map Specific Columns:

  • Manually map each data field to specific table columns
  • Use when your data structure doesn’t match the table schema
  • Allows field renaming and selective data insertion (see the sketch below)
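If you're curious what the two modes amount to, the sketch below shows a rough equivalent using Google's google-cloud-bigquery Python client. This is an illustration of the mapping idea, not this step's actual implementation; the project, table, and field names are made up.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-marketing-project")  # hypothetical project
table_id = "my-marketing-project.ads.daily_performance"   # hypothetical table

raw_rows = [{"campaignName": "Spring Sale", "clickCount": 120}]

# "Use All Data": works when field names already match column names one-to-one.
client.insert_rows_json(table_id, raw_rows)

# "Map Specific Columns": rename and select fields to fit the table schema.
column_map = {"campaignName": "campaign_name", "clickCount": "clicks"}
mapped_rows = [
    {column_map[k]: v for k, v in row.items() if k in column_map}
    for row in raw_rows
]
errors = client.insert_rows_json(table_id, mapped_rows)
print(errors)  # an empty list means every row was inserted
```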

Real-World Examples

Daily Performance Archive:

Google Ads Get Report (yesterday's data) → Insert Rows (daily_performance)
Schedule: Run daily at 6 AM
"Automatically archive daily performance data for trend analysis"

Lead Generation Data Pipeline:

LinkedIn Get Profile Details → Rename Fields → Insert Rows (leads_table)
"Store prospect information in BigQuery for CRM integration"

Best Practices

Schema Management:

  • Ensure your BigQuery table schema matches your data structure
  • Use consistent data types across all data sources
  • Plan your table schema before building workflows (see the sketch after this list)
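Sketching the schema in code before you build anything can be a useful planning step. Below is one way to do it with the BigQuery Python client; the column names and types are placeholders, so adapt them to your own data.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-marketing-project")  # hypothetical project

# Declare types up front so every data source must conform to them.
schema = [
    bigquery.SchemaField("report_date", "DATE", mode="REQUIRED"),
    bigquery.SchemaField("campaign_id", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("clicks", "INTEGER"),
    bigquery.SchemaField("cost", "FLOAT"),
]

table = bigquery.Table("my-marketing-project.ads.daily_performance", schema=schema)
client.create_table(table, exists_ok=True)  # no-op if the table already exists
```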

Data Quality:

  • Clean and validate data before insertion
  • Use “Skip Invalid Rows” to handle data quality issues
  • Monitor insertion results for failed rows (as sketched below)
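“Skip Invalid Rows” drops bad rows instead of failing the whole batch. If you want the same safety net in your own tooling, a minimal approximation looks like this; the required-field check is an assumption about your schema:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-marketing-project")  # hypothetical project
REQUIRED_FIELDS = {"report_date", "campaign_id"}          # assumed schema

def split_valid(rows):
    """Separate rows that have all required fields from ones that don't."""
    valid, skipped = [], []
    for row in rows:
        (valid if REQUIRED_FIELDS <= row.keys() else skipped).append(row)
    return valid, skipped

rows = [
    {"report_date": "2024-06-01", "campaign_id": "123", "clicks": 87},
    {"clicks": 12},  # missing required fields -> skipped, not inserted
]
valid, skipped = split_valid(rows)
errors = client.insert_rows_json("my-marketing-project.ads.daily_performance", valid)
print(f"inserted {len(valid) - len(errors)}, skipped {len(skipped)}, failed {len(errors)}")
```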

Performance Optimization:

  • Batch multiple data sources when possible
  • Use partitioned tables for time-series data
  • Consider clustering for frequently queried columns (see the sketch below)
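For time-series marketing data, partitioning and clustering are set when the table is created. A sketch with the Python client, reusing the hypothetical table from above:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-marketing-project")  # hypothetical project

table = bigquery.Table(
    "my-marketing-project.ads.daily_performance",
    schema=[
        bigquery.SchemaField("report_date", "DATE", mode="REQUIRED"),
        bigquery.SchemaField("campaign_id", "STRING", mode="REQUIRED"),
        bigquery.SchemaField("clicks", "INTEGER"),
    ],
)
# One partition per day keeps date-range scans cheap.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="report_date"
)
# Cluster on the column you filter by most often.
table.clustering_fields = ["campaign_id"]
client.create_table(table, exists_ok=True)
```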

Tips

Table Preparation:

  • Create your BigQuery tables and schema first
  • Use appropriate data types for your marketing data
  • Consider partitioning by date for performance

Data Consistency:

  • Standardize field names across all data sources
  • Use Rename Fields before insertion to match schema
  • Maintain consistent date/time formats (a normalization sketch follows)
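A common normalization is converting every source's timestamps to ISO 8601 before insertion, which BigQuery accepts for DATE and TIMESTAMP columns. A small sketch; the source formats and field names are assumptions:

```python
from datetime import datetime, timezone

def normalize_timestamp(value):
    """Coerce a few common source formats into an ISO 8601 UTC string."""
    if isinstance(value, datetime):
        return value.astimezone(timezone.utc).isoformat()
    for fmt in ("%m/%d/%Y %H:%M", "%Y-%m-%d %H:%M:%S"):  # assumed source formats
        try:
            return datetime.strptime(value, fmt).replace(tzinfo=timezone.utc).isoformat()
        except ValueError:
            continue
    return value  # already ISO, or left for "Skip Invalid Rows" to catch

row = {"lead_name": "Jane Doe", "captured_at": "06/01/2024 14:30"}
row["captured_at"] = normalize_timestamp(row["captured_at"])
print(row)  # {'lead_name': 'Jane Doe', 'captured_at': '2024-06-01T14:30:00+00:00'}
```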

Error Handling:

  • Enable “Skip Invalid Rows” for production workflows
  • Monitor insertion results for data quality issues
  • Have fallback plans for schema mismatches (see the fallback sketch below)
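One workable fallback is parking failed rows for later reprocessing. The BigQuery client reports per-row errors with the index of each offending row, which a sketch like this can use (same hypothetical table as above):

```python
import json

from google.cloud import bigquery

client = bigquery.Client(project="my-marketing-project")  # hypothetical project
rows = [{"report_date": "2024-06-01", "campaign_id": "123", "clicks": "not-a-number"}]

errors = client.insert_rows_json("my-marketing-project.ads.daily_performance", rows)

# Each error entry carries the index of the offending row plus the reasons.
failed = [rows[entry["index"]] for entry in errors]
if failed:
    with open("failed_rows.jsonl", "a") as f:  # simple dead-letter file
        for row in failed:
            f.write(json.dumps(row) + "\n")
    print(f"parked {len(failed)} row(s) for reprocessing")
```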

FAQ