Intermediate · ~20 min setup · Communication & CRM · Verified April 2026

How to Send Daily Copper Pipeline Updates to Slack with n8n

A scheduled n8n workflow that pulls deal data from Copper every morning and posts a formatted pipeline summary — with deal counts, forecast totals, and recent activity — directly to a Slack channel.

Steps and UI details are based on platform versions at time of writing — check each platform for the latest interface.

Best for

Sales teams of 5–30 people who want automated morning pipeline digests in Slack without paying for a BI tool or building reports manually.

Not ideal for

Teams needing real-time deal alerts triggered the moment a stage changes — use a webhook-based workflow for that instead.

Sync type

scheduled

Use case type

reporting

Real-World Example

💡

A 12-person B2B SaaS team runs this every weekday at 8:00 AM to post a pipeline digest to #sales-leadership. Before the workflow, the sales manager pulled numbers from Copper manually each morning — 15 minutes of copy-pasting before every standup. Now the summary is waiting in Slack before anyone opens their laptop, showing open deals by stage, total forecast value, and the three deals with activity in the last 24 hours.

What Will This Cost?


Each platform counts differently — Zapier: 1 task per trigger. Make: 1 operation per module per record. n8n: 1 execution per run.

Prices shown for annual billing. Based on published pricing as of April 2026.

Estimated ROI

~1,000 min saved/mo · $583 labor value/mo · free (no platform cost)

Based on ~2 min manual effort per operation at $35/hr fully loaded labor cost.

Implementation

Skip the setup

Import this workflow directly into n8n

Copy the pre-built n8n blueprint and paste it straight into n8n. All modules, filters, and field mappings are already configured — you just need to connect your accounts.

Before You Start

Make sure you have everything ready.

Copper API key and the email address associated with your Copper account — both are required for every API request
A Slack bot token (xoxb-) with the 'chat:write' scope, or OAuth2 credentials from a Slack app you control
The bot must be added to the target Slack channel — in Slack, open the channel, click the channel name, go to Integrations, and add the bot
Access to your n8n instance (self-hosted or n8n Cloud) with permission to create and activate workflows

Optional

Your Copper pipeline ID if you want to filter results to a single pipeline — find it in Copper under Settings > Pipelines > click a pipeline and copy the ID from the URL

Field Mapping

Map these fields between your apps.

Field → API name

Required:
  Deal Name → name
  Monetary Value → monetary_value
  Close Date → close_date
  Pipeline Stage ID → pipeline_stage_id
  Status → status
  Stage Name → name

Optional:
  Pipeline ID → pipeline_id
  Assignee ID → assignee_id
  Win Probability → win_probability
  Last Activity Date → date_last_contacted

Step-by-Step Setup

1

n8n Dashboard > New Workflow

Create a new n8n workflow

Log into your n8n instance and click the orange 'New Workflow' button in the top right of the Workflows dashboard. Name it something descriptive like 'Daily Copper Pipeline → Slack'. You'll build the workflow left to right: trigger first, then Copper fetch, then data transformation, then Slack post. Keep this tab open — you'll be switching between n8n and both app credential screens.

  1. Click 'New Workflow' in the top right of the n8n dashboard
  2. Click the workflow title at the top and rename it to 'Daily Copper Pipeline → Slack'
  3. Click the canvas to deselect and confirm the name saved
What you should see: You should see a blank canvas with the workflow name displayed at the top of the editor.
2

Canvas > + > Schedule Trigger

Add the Schedule trigger

Click the '+' node button in the center of the canvas to open the node picker. Search for 'Schedule Trigger' and select it. Set the trigger to run on a Cron schedule. For a daily 8:00 AM weekday digest, enter the cron expression '0 8 * * 1-5'. This fires Monday through Friday at 8:00 AM in whatever timezone your n8n server runs in — check your server timezone before setting this.

  1. Click the '+' button on the blank canvas
  2. Type 'Schedule' in the search bar and select 'Schedule Trigger'
  3. Set 'Trigger Interval' to 'Custom (Cron)'
  4. Enter '0 8 * * 1-5' in the cron expression field
  5. Click 'Test Step' to confirm the trigger node shows a valid next-run time
What you should see: The Schedule Trigger node shows 'Next execution' with a timestamp for the next upcoming weekday at 8:00 AM.
Common mistake — n8n uses the server's local timezone for cron triggers. If your server runs UTC and your team is in EST, '0 8 * * 1-5' fires at 3:00 AM EST. Adjust the hour accordingly or set the GENERIC_TIMEZONE environment variable on your n8n instance.
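If you'd rather compute the shifted hour than do the offset math in your head, here's a tiny sketch. The helper name and the offset convention are ours, not anything n8n provides:

```javascript
// Hypothetical helper: find the cron hour for a UTC server so the digest
// lands at the desired local hour. utcOffsetHours is your team's UTC offset
// (e.g. EST = -5, CET = +1).
function cronHourForUtcServer(localHour, utcOffsetHours) {
  return ((localHour - utcOffsetHours) % 24 + 24) % 24;
}

console.log(cronHourForUtcServer(8, -5)); // 13 → '0 13 * * 1-5' fires 8:00 AM EST
```

This only handles a fixed offset — daylight saving shifts the result twice a year, which is another reason to set GENERIC_TIMEZONE instead.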
3

Copper App > Settings > Integrations > API Keys

Connect Copper credentials

Add an HTTP Request node (Copper does not have a native n8n node, so you'll call the Copper REST API directly). Click '+' after the Schedule Trigger and search for 'HTTP Request'. In the node settings, you'll need to configure Header Auth with your Copper API key and your registered email. In Copper, go to Settings > Integrations > API Keys to generate a key if you don't have one.

  1. In Copper, navigate to Settings > Integrations > API Keys
  2. Click 'Generate API Key' and copy the key
  3. Back in n8n, add an HTTP Request node after the Schedule Trigger
  4. In the HTTP Request node, set Authentication to 'Header Auth'
  5. Add two headers: 'X-PW-AccessToken' with your API key, and 'X-PW-UserEmail' with your Copper account email
  6. Add a third header: 'X-PW-Application' set to 'developer_api'
What you should see: The HTTP Request node shows three headers configured and Authentication set to 'Header Auth'. No red validation errors on the node.
Common mistake — Copper requires all three headers on every request — X-PW-AccessToken, X-PW-UserEmail, and X-PW-Application. Missing any one of them returns a 401 even if your API key is correct.
4

HTTP Request Node > Parameters

Fetch open opportunities from Copper

Configure the HTTP Request node to POST to Copper's opportunities search endpoint. Use the URL 'https://api.copper.com/developer_api/v1/opportunities/search'. Set the method to POST and the body to JSON. Send a search payload that filters for open opportunities — set 'assignee_ids' to empty to get all reps, and set 'status' to 'Open'. You can also filter by pipeline ID if you have multiple pipelines and only want one.

  1. Set Method to 'POST'
  2. Set URL to 'https://api.copper.com/developer_api/v1/opportunities/search'
  3. Set Body Content Type to 'JSON'
  4. Paste the following JSON body: {"page_size": 200, "status": "Open", "sort_by": "close_date", "sort_direction": "asc"}
  5. Click 'Test Step' to execute the request
What you should see: The output panel shows a JSON array of opportunity objects, each with fields like 'name', 'monetary_value', 'close_date', 'pipeline_stage_id', and 'assignee_id'.
Common mistake — Copper's search endpoint returns a maximum of 200 records per page. If you have more than 200 open deals, you need to paginate using the 'page_number' parameter. Most teams under 30 reps won't hit this, but check your deal count first.
5

Canvas > + after Opportunities Node > HTTP Request

Fetch pipeline stage names

Copper returns 'pipeline_stage_id' as a numeric ID, not a human-readable name. Add a second HTTP Request node to fetch your pipeline stages so you can map IDs to names in the next step. Call GET 'https://api.copper.com/developer_api/v1/pipeline_stages' with the same three auth headers. This returns a flat array of all stages across all pipelines — you'll filter by pipeline ID in the code node.

  1. Add a new HTTP Request node after the opportunities fetch node
  2. Set Method to 'GET'
  3. Set URL to 'https://api.copper.com/developer_api/v1/pipeline_stages'
  4. Add the same three Copper auth headers as Step 3
  5. Click 'Test Step' to confirm stage data returns
What you should see: The output shows an array of stage objects, each with 'id', 'name', 'pipeline_id', and 'win_probability' fields.
6

Canvas > + > Code

Transform data with a Code node

Add a Code node after both HTTP Request nodes. This is where you build the pipeline summary: group deals by stage, sum monetary values, count deals per stage, flag deals closing within 7 days, and resolve stage IDs to stage names using the stages array from Step 5. Connect both HTTP Request nodes into the Code node using n8n's 'Merge' approach — or pass the stages data via a Set node into a variable. The Code node outputs a single structured object ready to format into a Slack message.

  1. Add a Code node after the pipeline stages HTTP Request node
  2. Set the Code node language to 'JavaScript'
  3. Paste the transformation code from the code block below
  4. Click 'Test Step' to verify the output object contains stage summaries and totals
What you should see: The Code node output shows a single item with fields like 'totalDeals', 'totalForecast', 'stageSummary' (array), and 'closingSoon' (array of deals closing within 7 days).
Common mistake — The Code node in n8n runs Node.js. You cannot use browser globals like 'fetch' inside it. All data must come in through the node's input — don't try to make API calls inside the Code node.

Paste this into the Code node (Step 6). It expects two inputs: index 0 is the opportunities array from Copper, index 1 is the pipeline stages array. It groups deals by stage, sums values, flags deals closing within 7 days, calculates a weighted forecast using win_probability, and returns a single structured object ready for the Slack Block Kit message.

JavaScript — Code Node

// Code node — expects:
// $input.all()[0] = Copper opportunities array
// $input.all()[1] = Copper pipeline stages array

const opportunitiesInput = $input.all()[0].json;
const stagesInput = $input.all()[1].json;

// Build a stage ID → stage object map
const stageMap = {};
for (const stage of stagesInput) {
  stageMap[stage.id] = stage;
}

// Group deals by stage
const stageGroups = {};
let totalForecast = 0;
let weightedForecast = 0;
const now = Math.floor(Date.now() / 1000);
const sevenDays = 7 * 24 * 60 * 60;
const closingSoon = [];

for (const deal of opportunitiesInput) {
  const stageId = deal.pipeline_stage_id;
  const stage = stageMap[stageId] || { name: 'Unknown Stage', win_probability: 0 };
  const value = deal.monetary_value || 0;

  if (!stageGroups[stageId]) {
    stageGroups[stageId] = {
      stageName: stage.name,
      dealCount: 0,
      stageTotal: 0,
      winProbability: stage.win_probability || 0
    };
  }

  stageGroups[stageId].dealCount += 1;
  stageGroups[stageId].stageTotal += value;
  totalForecast += value;
  weightedForecast += value * ((stage.win_probability || 0) / 100);

  // Flag deals closing within 7 days
  if (deal.close_date && deal.close_date - now <= sevenDays && deal.close_date >= now) {
    closingSoon.push({
      name: deal.name,
      closeTs: deal.close_date,
      closeDate: new Date(deal.close_date * 1000).toLocaleDateString('en-US', { month: 'long', day: 'numeric', year: 'numeric' }),
      value: `$${value.toLocaleString()}`,
      stage: stage.name,
      assigneeId: deal.assignee_id
    });
  }
}

// Sort closing-soon by close date ascending. Sort on the raw timestamp, not
// the formatted string: comparing the strings would order alphabetically by
// month name instead of chronologically.
closingSoon.sort((a, b) => a.closeTs - b.closeTs);

// Build stage summary array sorted by total value descending
const stageSummary = Object.values(stageGroups).sort((a, b) => b.stageTotal - a.stageTotal);

const reportDate = new Date().toLocaleDateString('en-US', { weekday: 'long', month: 'long', day: 'numeric', year: 'numeric' });

return [{
  json: {
    reportDate,
    totalDeals: opportunitiesInput.length,
    totalForecast: `$${Math.round(totalForecast).toLocaleString()}`,
    weightedForecast: `$${Math.round(weightedForecast).toLocaleString()}`,
    stageSummary: stageSummary.map(s => ({
      stageName: s.stageName,
      dealCount: s.dealCount,
      stageTotal: `$${Math.round(s.stageTotal).toLocaleString()}`
    })),
    closingSoon,
    hasClosingSoon: closingSoon.length > 0
  }
}];
7

Canvas > + > Slack > Credentials > Create New

Connect Slack credentials

Add a Slack node after the Code node. Click '+' and search for 'Slack'. Select the 'Send a Message' operation. Click 'Create New Credential' and choose 'OAuth2'. n8n will walk you through authorizing a Slack app — you'll need to either use an existing Slack app with a bot token or create one at api.slack.com/apps. The bot needs the 'chat:write' scope at minimum. If you want the bot to post to private channels, add it to those channels first.

  1. Add a Slack node after the Code node
  2. Set Operation to 'Send a Message'
  3. Click 'Create New Credential' and select 'Slack OAuth2 API'
  4. Click 'Connect my account' and authorize through the Slack OAuth screen
  5. Confirm the credential shows a green 'Connected' status
What you should see: The Slack node shows a green 'Connected' badge and the credential name appears in the dropdown.
Common mistake — If you use a Slack Bot Token (xoxb-) instead of OAuth2, paste it under 'Access Token' in n8n's Slack credential screen. Bot tokens do not expire, which makes them more reliable for scheduled workflows than OAuth tokens that require refresh.
8

Slack Node > Parameters > Message Type

Configure the Slack message

In the Slack node, set the 'Channel' field to the channel where the digest should post (e.g. '#sales-pipeline' or 'C0123456789' for a channel ID — channel IDs are more reliable than names). Set 'Message Type' to 'Blocks' to use Slack Block Kit for formatted output. You'll build a message with a header section showing the date, a fields section with total deal count and forecast value, and a list of deals closing within 7 days.

  1. Set 'Channel' to your target Slack channel (e.g. '#sales-pipeline')
  2. Set 'Message Type' to 'Blocks'
  3. In the Blocks field, click the expression editor (toggle icon) and paste your Block Kit JSON referencing the Code node output
  4. Include a header block with today's date using the expression '{{ $now.toFormat("MMMM d, yyyy") }}'
  5. Add a divider block and a section block for each stage summary
What you should see: The Slack node preview panel shows the formatted block structure. When you click 'Test Step', a real message posts to the channel.
Common mistake — Slack Block Kit has a 50-block limit per message. If your pipeline has 20+ stages each with a section block, you'll hit this. Consolidate stages with fewer than 3 deals into an 'Other' bucket in the Code node to keep block count manageable.
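A minimal sketch of what that Block Kit payload can look like, assuming the Code node output fields from Step 6 (buildBlocks is our own name, not a Slack or n8n API):

```javascript
// Build Slack Block Kit blocks from the Step 6 summary object.
// Assumes fields reportDate, totalDeals, totalForecast, and stageSummary.
function buildBlocks(summary) {
  const blocks = [
    { type: 'header', text: { type: 'plain_text', text: `Pipeline Digest · ${summary.reportDate}` } },
    {
      type: 'section',
      fields: [
        { type: 'mrkdwn', text: `*Open deals:* ${summary.totalDeals}` },
        { type: 'mrkdwn', text: `*Total forecast:* ${summary.totalForecast}` },
      ],
    },
    { type: 'divider' },
  ];
  // One section per stage — keep stage count low to stay under the 50-block cap
  for (const s of summary.stageSummary) {
    blocks.push({
      type: 'section',
      text: { type: 'mrkdwn', text: `*${s.stageName}* · ${s.dealCount} deals · ${s.stageTotal}` },
    });
  }
  return blocks;
}
```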
9

HTTP Request Node > Right-click > Add Error Output

Add error handling with a fallback node

Scheduled workflows that fail silently are worse than no automation. Right-click the HTTP Request node for opportunities and select 'Add Error Output'. Connect that error output to a second Slack node configured to post to a #alerts or #dev-ops channel when the workflow fails. Set a static message like 'Daily pipeline workflow failed — check n8n execution log'. Do the same for the Slack posting node.

  1. Right-click the Copper opportunities HTTP Request node
  2. Select 'Add Error Output'
  3. Add a new Slack node connected to the error output
  4. Set the channel to a monitoring channel (e.g. '#workflow-alerts')
  5. Set the message to 'Daily Copper pipeline workflow failed. Check n8n execution log for details.'
  6. Repeat for the main Slack posting node
What you should see: Both the Copper fetch node and the Slack post node show a red error output connector. Each connects to a fallback Slack alert node.
10

n8n Canvas > Test Workflow (top toolbar)

Test the full workflow end-to-end

Click 'Test Workflow' (not 'Test Step') in the top toolbar to run every node in sequence. Watch the execution panel on the right — green nodes passed, red nodes failed. Click any node to inspect its input and output data. Confirm the Slack message arrives in the target channel with correct deal counts, stage names, and forecast totals. Check that the closing-soon list shows only deals within 7 days.

  1. Click 'Test Workflow' in the top toolbar
  2. Watch each node turn green in sequence
  3. Click the Code node and verify 'totalForecast' shows a formatted dollar amount (the Code node outputs it as a string like '$120,000')
  4. Open Slack and confirm the message posted with correct data
  5. Compare one deal's value in the Slack message against the same deal in Copper
What you should see: All nodes show green status. The Slack channel receives a formatted message with the correct pipeline summary. Deal counts and forecast totals match what you see in Copper's pipeline view.
Common mistake — Copper's 'monetary_value' field returns values in cents for some account configurations — if your totals look 100x too large, divide all monetary_value fields by 100 in the Code node.
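If you do hit the cents case, a one-line guard in the Code node covers it. This helper is our own, and only needed for accounts that return cents:

```javascript
// Apply only if your totals come out 100x too large (cents, not dollars).
const toDollars = cents => (cents || 0) / 100;
```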
11

n8n Canvas > Active toggle (top right)

Activate the workflow

Once the test run looks correct, click the toggle in the top right of the n8n editor to switch the workflow from 'Inactive' to 'Active'. n8n will now run this automatically at every scheduled time. Go to the 'Executions' tab to monitor runs. After the first live run the next morning, open that execution and verify the output matches the test run. Set a reminder to check executions manually for the first 3 days.

  1. Click the 'Inactive' toggle in the top right to flip it to 'Active'
  2. Confirm the toggle turns green and shows 'Active'
  3. Navigate to the Executions tab and verify the workflow appears in the upcoming schedule
  4. Set a calendar reminder to check the Executions tab the following morning
What you should see: The workflow status shows 'Active' and the Executions tab shows a scheduled next-run time matching your cron expression.

Scaling Beyond 200 Open Deals

If your pipeline holds more than 200 open deals in Copper, apply these adjustments.

1

Paginate the Copper search request

Copper's search endpoint caps at 200 results per page. Add a loop in n8n using the 'Loop Over Items' node combined with a counter. On each iteration, increment the 'page_number' field in the request body and merge results until you receive fewer than 200 records in a response.
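The loop boils down to this pattern. fetchPage is a stand-in for one HTTP Request iteration, not an n8n API:

```javascript
// Generic pagination: keep requesting pages until a short page arrives.
// fetchPage(pageNumber) must resolve to that page's array of records —
// in the workflow, that's one Copper search call with that page_number.
async function fetchAllPages(fetchPage, pageSize = 200) {
  const all = [];
  for (let page = 1; ; page++) {
    const batch = await fetchPage(page);
    all.push(...batch);
    if (batch.length < pageSize) break; // short page = last page
  }
  return all;
}
```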

2

Batch stage name lookups

The /pipeline_stages endpoint returns all stages in one call — no pagination needed. Fetch it once at the start and pass it through to the Code node. Do not call this endpoint per deal or you'll hit Copper's rate limit of 600 requests per minute.

3

Respect Copper's rate limit

Copper allows 600 API requests per minute. For large pipelines requiring 3-4 paginated calls plus a stages call plus a users call, you're nowhere near the limit. But if you extend this workflow to fetch activity logs per deal, those individual calls stack up fast — batch fetch activities by deal ID array instead.
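If you do extend into per-deal activity fetches, batch the IDs first. A tiny chunking helper (our own) turns hundreds of per-deal calls into a handful of bulk requests:

```javascript
// Split deal IDs into batches so activity lookups become a few bulk
// requests instead of one request per deal.
function chunk(arr, size = 50) {
  const out = [];
  for (let i = 0; i < arr.length; i += size) out.push(arr.slice(i, i + size));
  return out;
}
```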

4

Cap the Slack message to top 20 deals

Slack Block Kit has a 50-block limit. With 200+ deals, you can't list every deal. In the Code node, sort closingSoon by close date and slice to the first 5-10. For the stage summary, show only stages with 2+ deals and group the rest as 'Other (N deals, $X total)' to keep block count under 30.
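Inside the Code node, that consolidation could look like this sketch. It assumes it runs on the numeric stageTotal values before they're formatted into dollar strings:

```javascript
// Collapse stages below a deal-count threshold into one 'Other' bucket
// so the Slack message stays under the block limit.
function consolidateStages(stages, minDeals = 2) {
  const keep = [];
  let otherDeals = 0;
  let otherTotal = 0;
  for (const s of stages) {
    if (s.dealCount >= minDeals) {
      keep.push(s);
    } else {
      otherDeals += s.dealCount;
      otherTotal += s.stageTotal;
    }
  }
  if (otherDeals > 0) {
    keep.push({
      stageName: `Other (${otherDeals} deals, $${otherTotal.toLocaleString()} total)`,
      dealCount: otherDeals,
      stageTotal: otherTotal,
    });
  }
  return keep;
}
```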


Analysis

Verdict: Why n8n for this workflow

Use n8n for this if you want full control over the message format, need to resolve Copper's numeric IDs to human-readable names, or already run n8n self-hosted and want to avoid paying for another SaaS tool. The code node is genuinely useful here — Copper returns raw data that needs three transformations before it's Slack-ready: stage ID lookup, monetary value formatting, and close-date filtering. n8n handles all of that in one JavaScript block. If your team has zero technical staff and needs this live in 20 minutes, use Zapier instead — you'll give up the formatting control but skip the code entirely.

Cost

The cost math is straightforward. This workflow runs once per weekday morning, which is 22 executions per month. Each run uses 4-5 nodes: Schedule trigger, two HTTP Request nodes, one Code node, one Slack node. On n8n Cloud's Starter plan ($20/month), you get 2,500 executions per month — this workflow uses about 22, under 1% of the quota. Self-hosted n8n costs nothing except server time. Zapier would run this as a 4-step Zap on a schedule, which costs nothing on the free tier but lacks the code transformation — you'd need Formatter steps, pushing you to a paid plan at $19.99/month. Make's free tier covers 1,000 operations per month; this workflow uses roughly 110 operations per month, so it fits free on Make too.

Tradeoffs

Make is genuinely better at one thing here: its HTTP module has built-in pagination handling with an easier UI than n8n's loop setup. If Copper pagination is your main concern and you don't want to write code, Make's iterator + aggregator pattern handles it without a single line of JavaScript. Zapier's Scheduled Zap trigger is simpler to configure than n8n's cron syntax — no timezone math required, just pick a time from a dropdown. Power Automate has a native Slack connector but no Copper connector, so you'd be writing custom HTTP actions anyway, negating the UI advantage. Pipedream has a clean Copper integration and good scheduled trigger support, but its free tier limits executions to 333 per day, which is fine for this use case — the tradeoff is a less flexible code environment than n8n's full Node.js runtime. n8n wins when you're already self-hosting it, need to customize the Slack Block Kit output without fighting a visual builder, or want to extend this later into a more complex workflow with additional branches.

Three things you'll hit after setup. First: Copper's close_date field is a Unix timestamp in seconds, but JavaScript's Date constructor expects milliseconds — multiply by 1000 before parsing or your close dates will show as 1970. Second: if a Copper user deletes a pipeline stage without moving the deals off it first, those deals return a pipeline_stage_id that doesn't exist in your stages lookup. The Code node will silently label them 'Unknown Stage' unless you add explicit handling. Third: Slack's Block Kit JSON breaks with no visible error if you include a trailing comma or use a text field that exceeds 3,000 characters — Copper deal names with long custom fields can push section blocks over this limit. Truncate any dynamic text fields to 200 characters in the Code node before they hit the Slack payload.
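The truncation guard mentioned above is a one-liner (the helper name is ours):

```javascript
// Clamp dynamic text before it reaches the Slack payload, well under
// Block Kit's 3,000-character section limit.
const truncate = (s, max = 200) =>
  s && s.length > max ? s.slice(0, max - 1) + '…' : s;
```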

Ideas for what to build next

  • Add per-rep deal breakdowns: Extend the Code node to group deals by assignee_id, then call the Copper /users endpoint to resolve IDs to rep names. Post a secondary Slack message or thread reply showing each rep's deal count and forecast total.
  • Add a weekly win/loss summary: Create a second scheduled workflow that runs every Monday morning, fetching Copper opportunities with status 'Won' or 'Lost' and close_date within the last 7 days. Post a separate Slack message showing last week's closed deals and total revenue booked.
  • Route deal alerts to rep-specific Slack DMs: Add a branch after the Code node that loops through the closingSoon array and sends a direct Slack message to each rep's Slack user ID for deals they own closing within 3 days — a more targeted alternative to the channel-wide digest.
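A starting point for the per-rep breakdown idea, in the same style as the Step 6 Code node. The function name and output shape are ours; assignee_id and monetary_value are Copper's field names:

```javascript
// Group deals by assignee_id and sum forecast per rep. Resolve the IDs to
// names afterwards via Copper's /users endpoint.
function groupByAssignee(deals) {
  const byRep = {};
  for (const d of deals) {
    const id = d.assignee_id ?? 'unassigned';
    if (!byRep[id]) byRep[id] = { dealCount: 0, total: 0 };
    byRep[id].dealCount += 1;
    byRep[id].total += d.monetary_value || 0;
  }
  return byRep;
}
```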
