Intermediate · ~15 min setup · Communication & CRM · Verified April 2026

How to Send Daily Copper Pipeline Updates to Slack with Pipedream

A scheduled Pipedream workflow queries Copper for open deals, activity, and forecast data each morning, then posts a formatted summary to a Slack channel — no manual reporting needed.

Steps and UI details are based on platform versions at time of writing — check each platform for the latest interface.

Best for

Sales teams of 5–30 people who need daily deal visibility in Slack without building a BI dashboard or running manual CRM exports.

Not ideal for

Teams that need real-time deal alerts on every stage change — use a webhook-triggered workflow instead.

Sync type

scheduled

Use case type

reporting

Real-World Example

💡

A 12-person SaaS sales team uses this to post a 7am pipeline digest to #sales-leadership every weekday. Before automation, the sales manager pulled a Copper report manually each morning, copied numbers into a Slack message, and sent it — a 15-minute task that got skipped whenever she was in early calls. Now the digest posts automatically: total open pipeline value, deals closing this week, and which reps logged zero activity yesterday.

What Will This Cost?


Each platform counts differently — Zapier: 1 task per trigger. Make: 1 operation per module per record. n8n: 1 execution per run.

Prices shown for annual billing. Based on published pricing as of April 2026.

Estimated ROI

~1,000 min saved/mo · ~$583 labor value/mo · Free (no platform cost)

Based on ~2 min manual effort per operation at $35/hr fully loaded labor cost.

Implementation

Skip the setup

Import this workflow directly into Pipedream

Copy the pre-built Pipedream blueprint and paste it straight into Pipedream. All modules, filters, and field mappings are already configured — you just need to connect your accounts.

Before You Start

Make sure you have everything ready.

Copper account with API access enabled — go to Copper Settings > API Keys and generate a key. You will also need the email address associated with your Copper account since the API requires both as headers.
Copper user must have at least read access to Opportunities and Activities — admin or standard user roles both qualify; restricted roles may not return all deal data.
Slack account with permission to install apps and create bot tokens — you need the chat:write scope to post messages. If your workspace enforces app approval, get admin approval before starting.
Pipedream account (free tier works for up to 10,000 credits/month) — sign up at pipedream.com and confirm your email before creating workflows.
Target Slack channel already created (e.g. #sales-leadership) — the workflow will fail if the channel does not exist when the message posts.

Field Mapping

Map these fields between your apps.

Required fields (Copper API name in parentheses):

  Deal Name (name)
  Pipeline Value (monetary_value)
  Close Date (close_date)
  Deal Stage (pipeline_stage_id)
  Assignee ID (assignee_id)
  Activity Date (activity_date)
  Deal Status (status)
  Slack Channel — set in the Slack step; not mapped from Copper

Optional fields:

  Activity Type (type)
  Win Probability (win_probability)

Step-by-Step Setup

1

pipedream.com > Workflows > New Workflow

Create a new Pipedream workflow

Go to pipedream.com and click 'New Workflow' in the top-right corner. You'll land on the workflow canvas, which shows a trigger slot at the top and empty step slots below. Give the workflow a name like 'Daily Copper Pipeline Digest to Slack' — this label shows up in logs and makes debugging faster. Save the name before adding the trigger.

  1. Click 'New Workflow' in the top-right corner of the Pipedream dashboard
  2. Click the workflow title field at the top and type 'Daily Copper Pipeline Digest to Slack'
  3. Press Enter to save the name
What you should see: You should see a blank workflow canvas with an empty trigger slot labeled 'Add Trigger' and your workflow name displayed at the top.
2

Workflow Canvas > Add Trigger > Schedule

Set a daily schedule trigger

Click the 'Add Trigger' slot and search for 'Schedule'. Select the 'Schedule' source from the Pipedream core sources list — not any app-specific trigger. Choose 'Every day' and set the time to 07:00 in your local timezone. Pipedream runs schedule triggers in UTC by default, so if your team is in EST (UTC-5), enter 12:00 UTC to hit 7am EST. Double-check this — a wrong timezone offset means your digest posts mid-afternoon.

  1. Click the 'Add Trigger' slot on the canvas
  2. Type 'Schedule' in the search box
  3. Select 'Schedule' from Pipedream's core sources
  4. Choose 'Every day' from the interval dropdown
  5. Set the time field to your target UTC time (e.g. 12:00 for 7am EST)
  6. Click 'Save and continue'
What you should see: The trigger slot shows 'Schedule — Every day at 12:00 UTC' and a green 'Active' badge appears once you deploy.
Common mistake — Pipedream schedule times are always UTC. There is no timezone selector in the UI — convert manually before saving or your digest will post at the wrong time.
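A quick sanity check for that manual conversion — a tiny helper assuming a fixed UTC offset (it does not handle daylight-saving transitions, so re-check your schedule after clock changes):

```javascript
// Convert a target local hour to the UTC hour Pipedream's schedule expects.
// offsetHours is your UTC offset, e.g. -5 for EST.
// Assumption: fixed offset — no daylight-saving handling.
function localHourToUtc(localHour, offsetHours) {
  return ((localHour - offsetHours) % 24 + 24) % 24;
}

console.log(localHourToUtc(7, -5)); // 7am EST -> 12 (set 12:00 UTC)
console.log(localHourToUtc(7, 1));  // 7am CET -> 6  (set 06:00 UTC)
```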
3

Workflow Canvas > + Add Step > Run Node.js code

Add a Node.js step to query Copper for open opportunities

Click '+ Add Step' below the trigger and choose 'Run Node.js code'. This step calls the Copper REST API directly — Pipedream has no native Copper app with pre-built actions, so you'll write a short fetch call. You'll use the Copper Opportunities endpoint with a filter for 'Open' status. Store your Copper API key and user email as Pipedream environment variables before writing the code — never hardcode credentials in a step.

  1. Click '+ Add Step' below the Schedule trigger
  2. Select 'Run Node.js code' from the step type list
  3. Click the step title and rename it to 'fetch_copper_opportunities'
  4. Open Pipedream Settings > Environment Variables and add COPPER_API_KEY and COPPER_USER_EMAIL
  5. Paste your API fetch code into the code editor (see Pro Tip section)
What you should see: After clicking 'Test', the step output panel on the right shows a JSON array of opportunity objects with fields like name, monetary_value, close_date, and assignee_id.
Common mistake — Copper's API returns a maximum of 200 records per page. If your pipeline has more than 200 open deals, you must implement pagination by incrementing the page_number parameter in the search request body until a page returns fewer than 200 records — otherwise the step silently returns only the first 200.
4

Workflow Canvas > + Add Step > Run Node.js code

Add a second Node.js step to query Copper for recent activities

Click '+ Add Step' again and add another 'Run Node.js code' step. This step hits the Copper Activities endpoint filtered to the past 24 hours — this powers the 'who logged activity yesterday' section of the digest. Set the min_activity_date filter to Date.now() minus 86400000 milliseconds converted to a Unix timestamp in seconds. Reference the previous step's output using steps.fetch_copper_opportunities.$return_value to get assignee IDs for cross-referencing.

  1. Click '+ Add Step' below the fetch_copper_opportunities step
  2. Select 'Run Node.js code'
  3. Rename the step to 'fetch_copper_activities'
  4. Set the date filter to yesterday using Math.floor((Date.now() - 86400000) / 1000)
  5. Click 'Test' to confirm activity records appear in the output panel
What you should see: The step output shows an array of activity objects, each with parent (linked deal name), user_id, activity_date, and type fields.
5

Workflow Canvas > + Add Step > Run Node.js code

Add a Node.js step to aggregate pipeline metrics

Add a third 'Run Node.js code' step and rename it 'aggregate_metrics'. This step pulls from both previous steps' outputs and computes: total open pipeline value, count of deals closing within 7 days, and a list of rep IDs who logged zero activity in the past 24 hours. Use steps.fetch_copper_opportunities.$return_value and steps.fetch_copper_activities.$return_value to access prior outputs. Export a clean summary object — this is what gets formatted into the Slack message.

  1. Click '+ Add Step' below fetch_copper_activities
  2. Select 'Run Node.js code' and rename it to 'aggregate_metrics'
  3. Reference prior steps using steps.fetch_copper_opportunities.$return_value and steps.fetch_copper_activities.$return_value
  4. Export a summary object with totalPipelineValue, dealsClosingThisWeek, and inactiveReps
  5. Click 'Test' and verify the output panel shows your computed summary object
What you should see: The output panel shows a flat object like { totalPipelineValue: 284500, dealsClosingThisWeek: 4, inactiveReps: ['Alex T.', 'Maria C.'] }.
Common mistake — Copper returns assignee_id as a numeric ID, not a name. You must resolve IDs to names in this step using a lookup against the /people or /users endpoint, or your Slack message will show numbers instead of rep names.
6

Workflow Canvas > + Add Step > Slack > Send Message (chat.postMessage)

Connect Slack via a Pipedream Connected Account

Click '+ Add Step' and search for 'Slack'. Select the Slack app and choose the 'Send Message' action. In the step configuration panel, click 'Connect Account' under the Slack section — this opens an OAuth popup asking you to authorize Pipedream to post messages in your workspace. You need permission to install apps (or pre-approval from a workspace admin) so the Pipedream app can obtain the chat:write scope. Pipedream stores the token under Connected Accounts once authorized.

  1. Click '+ Add Step' below aggregate_metrics
  2. Search for 'Slack' in the app search box
  3. Select the 'Slack' app
  4. Choose the 'Send Message' action (chat.postMessage)
  5. Click 'Connect Account' in the step config panel
  6. Authorize Pipedream in the OAuth popup that appears
What you should see: After authorization, you should see your Slack workspace name listed under Connected Accounts with a green checkmark inside the step config panel.
Common mistake — If your Slack workspace enforces app approval, an admin must whitelist the Pipedream app before the OAuth flow completes. This is a Slack admin setting, not a Pipedream setting — check with your IT team first.
7

Workflow Canvas > Slack Step > Step Configuration Panel

Configure the Slack message content and target channel

In the Slack Send Message step, set the Channel field to your target channel (e.g. #sales-leadership or #pipeline). For the Text field, reference steps.aggregate_metrics.$return_value to build the message body. Use Slack's mrkdwn format for bold and bullet formatting — wrap section headers in *asterisks* and use newline characters to separate sections. Build the message string inside the step's prop fields using Pipedream's expression editor (the double-brace {{ }} syntax).

  1. Type your channel name in the 'Channel' field (e.g. #sales-leadership)
  2. Click the 'Text' field and switch to expression mode using the {{ }} toggle
  3. Reference steps.aggregate_metrics.$return_value.totalPipelineValue for the pipeline total
  4. Add mrkdwn formatting: wrap section headers in *asterisks*
  5. Set 'Username' to 'Pipeline Bot' and optionally add a chart emoji as the icon
What you should see: The step preview panel shows a formatted message string with your pipeline metrics populated from the aggregate_metrics step output.
Common mistake — Map fields using the variable picker — don't type field names manually. Hand-typed variable names often have invisible spacing errors that produce blank output.
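As a sketch, the mrkdwn body could be assembled like this — the field names (totalPipeline, closingThisWeek, inactiveReps, date) are assumptions based on the aggregate step described earlier; match them to whatever your step actually exports:

```javascript
// Build a Slack mrkdwn digest string from the aggregate step's summary object.
// Field names are assumed — adapt to your aggregate_metrics output.
function buildDigest(summary) {
  const lines = [
    `*Pipeline Digest — ${summary.date}*`,
    `*Total open pipeline:* $${summary.totalPipeline.toLocaleString('en-US')}`,
    `*Deals closing this week:* ${summary.closingThisWeek.length}`,
  ];
  for (const deal of summary.closingThisWeek) {
    lines.push(`• ${deal.name} — $${deal.value.toLocaleString('en-US')} (${deal.rep})`);
  }
  lines.push(
    summary.inactiveReps.length
      ? `*No activity yesterday:* ${summary.inactiveReps.join(', ')}`
      : '*No activity yesterday:* none'
  );
  return lines.join('\n');
}

const digest = buildDigest({
  date: 'April 14, 2026',
  totalPipeline: 284500,
  closingThisWeek: [{ name: 'Acme Renewal', value: 42000, rep: 'Alex T.' }],
  inactiveReps: ['Maria C.'],
});
console.log(digest);
```

Passing the resulting string to the Slack step's Text field keeps all formatting logic in one testable place instead of scattered across expression fields.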
8

Workflow Canvas > + Add Step > Slack > Send Message (error fallback)

Add error handling with a fallback notification step

Copper's API occasionally times out or returns 429 rate-limit errors. Without a fallback, a failed run produces a silent failure — no digest posts, no alert. Add a final Slack step below the main Send Message step. In its settings, set it to run 'on failure' using Pipedream's step-level error routing. This fallback step posts a short error message to a separate ops channel (e.g. #automation-alerts) with the error type and timestamp so someone knows the digest failed.

  1. Click '+ Add Step' below the main Slack Send Message step
  2. Select 'Slack' > 'Send Message' again
  3. Click the step settings gear icon and enable 'Run this step on error'
  4. Set the channel to #automation-alerts
  5. Set the text to 'Pipeline digest failed at {{steps.trigger.context.ts}} — check Pipedream logs'
What you should see: The canvas shows the fallback Slack step with an orange 'On Error' badge, and it is visually connected below the main Slack step.
9

Workflow Canvas > Test Workflow (top toolbar)

Test the full workflow end-to-end

Click 'Test Workflow' at the top of the canvas. Pipedream runs all steps in sequence using live data from Copper's API and posts a real message to your Slack channel. Check each step's output panel for errors before checking Slack — the output panel shows exactly what each step returned. Verify the Slack message shows realistic numbers, not nulls or [object Object], which means a field reference path is broken.

  1. Click the 'Test Workflow' button in the top toolbar
  2. Watch the step execution status indicators turn green sequentially
  3. Click each step's output panel to inspect the returned data
  4. Check your target Slack channel to confirm the digest message posted
  5. Verify the numbers match what you see in Copper's pipeline view
What you should see: All steps show green checkmarks, the Slack message appears in your target channel within 30 seconds, and the pipeline totals match a manual check in Copper.
Common mistake — If you test on a weekend, some Copper activity filters may return empty arrays — that is expected behavior, not a bug. Add a weekday guard in the aggregate_metrics step using new Date().getDay() if you want the workflow to skip Saturday and Sunday runs.
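The weekday guard mentioned above can be sketched as a small helper for the aggregate step — here using getUTCDay() for determinism, since the schedule fires in UTC (in a real Pipedream step you would call $.flow.exit() when it returns true):

```javascript
// Weekday guard: true on Saturday (6) and Sunday (0), evaluated in UTC.
function isWeekend(date = new Date()) {
  const day = date.getUTCDay(); // 0 = Sunday ... 6 = Saturday
  return day === 0 || day === 6;
}

console.log(isWeekend(new Date('2026-04-11T12:00:00Z'))); // Saturday -> true
console.log(isWeekend(new Date('2026-04-13T12:00:00Z'))); // Monday -> false
```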
10

Workflow Canvas > Deploy > Logs Tab

Deploy and verify the schedule

Click 'Deploy' in the top-right corner. The workflow status changes from 'Draft' to 'Active' and the Schedule trigger shows the next run time. Go to the workflow's 'Logs' tab after the first scheduled run to confirm the execution completed without errors. Pipedream retains logs for 30 days on the free tier and indefinitely on paid plans — check logs the morning after deployment to confirm the digest posted at the right time.

  1. Click the 'Deploy' button in the top-right corner of the canvas
  2. Confirm the workflow status shows 'Active'
  3. Note the 'Next run' timestamp shown in the Schedule trigger block
  4. The following morning, open the workflow's 'Logs' tab to confirm a successful run
  5. Cross-check the Slack channel to confirm the digest posted at the expected time
What you should see: The Logs tab shows a completed run with all steps green, and your Slack channel has a digest message timestamped at your configured schedule time.
Common mistake — Re-confirm the UTC conversion on your Schedule trigger maps to the right local hour, and re-check it after daylight-saving changes. Also verify the workflow shows 'Active' — Pipedream schedule triggers won't fire on a draft workflow.

Pro Tip: One Combined Code Step

Paste this into a single Node.js step called 'fetch_and_aggregate' to replace steps 3–5. It handles Copper API pagination, resolves assignee IDs to names in one batch call, computes a weighted forecast, and returns a ready-to-format summary object. Place it directly after the Schedule trigger step on the Pipedream canvas.

import axios from 'axios';

export default defineComponent({
  async run({ steps, $ }) {
    const headers = {
      'X-PW-AccessToken': process.env.COPPER_API_KEY,
      'X-PW-UserEmail': process.env.COPPER_USER_EMAIL,
      'X-PW-Application': 'developer_api',
      'Content-Type': 'application/json',
    };

    // Fetch all open opportunities (up to 600 across 3 pages of 200)
    let opportunities = [];
    for (let page = 1; page <= 3; page++) {
      const res = await axios.post(
        'https://api.copper.com/developer_api/v1/opportunities/search',
        { page_size: 200, page_number: page, statuses: ['Open'] },
        { headers }
      );
      opportunities = opportunities.concat(res.data);
      if (res.data.length < 200) break; // no more pages
      await new Promise(r => setTimeout(r, 500)); // respect rate limit
    }

    // Fetch users to resolve assignee IDs to display names
    const usersRes = await axios.get(
      'https://api.copper.com/developer_api/v1/users',
      { headers }
    );
    const userMap = {};
    for (const user of usersRes.data) {
      userMap[user.id] = user.name;
    }

    // Fetch yesterday's activities
    const yesterday = Math.floor((Date.now() - 86400000) / 1000);
    const activitiesRes = await axios.post(
      'https://api.copper.com/developer_api/v1/activities/search',
      { page_size: 200, min_activity_date: yesterday },
      { headers }
    );
    const activeUserIds = new Set(
      activitiesRes.data.map(a => a.user_id)
    );

    // Aggregate metrics
    const now = Date.now() / 1000;
    const sevenDays = now + 7 * 86400;
    let totalPipeline = 0;
    let weightedForecast = 0;
    const closingThisWeek = [];
    const allAssigneeIds = new Set();

    for (const deal of opportunities) {
      const value = parseFloat(deal.monetary_value || 0);
      const prob = (deal.win_probability || 0) / 100;
      totalPipeline += value;
      weightedForecast += value * prob;
      allAssigneeIds.add(deal.assignee_id);
      // Assumes close_date is a Unix timestamp in seconds — verify against a
      // sample response, since some Copper date fields use other formats.
      if (deal.close_date && deal.close_date <= sevenDays) {
        closingThisWeek.push({
          name: deal.name,
          value,
          rep: userMap[deal.assignee_id] || `ID:${deal.assignee_id}`,
        });
      }
    }

    const inactiveReps = [...allAssigneeIds]
      .filter(id => !activeUserIds.has(id))
      .map(id => userMap[id] || `ID:${id}`);

    // Return the summary directly — do NOT use $.flow.exit() here, since
    // that ends the workflow run and the Slack step would never execute.
    // Downstream steps read this object via $return_value.
    return {
      totalPipeline: Math.round(totalPipeline),
      weightedForecast: Math.round(weightedForecast),
      closingThisWeek,
      inactiveReps,
      dealCount: opportunities.length,
      date: new Date().toLocaleDateString('en-US', { month: 'long', day: 'numeric', year: 'numeric' }),
    };
  },
});

Scaling Beyond 200+ Open Deals

If your Copper pipeline exceeds 200 open deals, apply these adjustments.

1

Implement Copper API pagination

Copper returns a maximum of 200 records per page. Increment the page_number parameter in the search request body from 1 until a page returns fewer than 200 records. Add a 500ms delay between requests to stay under the 600 requests/minute rate limit.

2

Cache the user ID-to-name lookup

The /users endpoint returns all workspace users in a single call. Make this call once at the start of the step and build a lookup object — do not call /users once per deal, or you will exhaust your rate limit quickly on large pipelines.

3

Limit Slack message length for large pipelines

Slack messages have a 4,000-character limit for the text field and a 3,000-character limit per Block Kit section. For pipelines with 50+ deals closing this week, truncate the list to the top 10 by value and add a '+ X more — see Copper' link at the bottom.
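A minimal truncation sketch, assuming deal objects shaped like the aggregate step's output ({ name, value }); the top-10 cutoff is arbitrary — tune it to your team's appetite:

```javascript
// Keep the Slack message under length limits: show the top N deals by
// value, then summarize the rest as a "+ X more" line.
function truncateDeals(deals, limit = 10) {
  const sorted = [...deals].sort((a, b) => b.value - a.value);
  const shown = sorted.slice(0, limit);
  const hidden = sorted.length - shown.length;
  const lines = shown.map(d => `• ${d.name} — $${d.value.toLocaleString('en-US')}`);
  if (hidden > 0) lines.push(`+ ${hidden} more — see Copper`);
  return lines.join('\n');
}

// Example: 12 deals collapse to the top 10 plus a "+ 2 more" footer.
const sample = Array.from({ length: 12 }, (_, i) => ({ name: `Deal ${i + 1}`, value: (i + 1) * 1000 }));
console.log(truncateDeals(sample));
```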

4

Move to batch processing for 500+ deals

Pipedream code steps have a 30-second execution timeout on standard plans. A full paginated pull of 600 deals with name lookups can exceed this. Split the work into two steps: one step fetches and stores raw opportunity data, the second step aggregates — this keeps each step well under the timeout.


Analysis

Verdict: Why Pipedream for this workflow

Use Pipedream for this if your team has someone comfortable reading and editing Node.js. The Copper API has no native Pipedream app with pre-built actions, which means you are writing fetch calls — but that also means you get full control over filtering, pagination, and metric calculation without fighting a GUI's limitations. Pipedream is also the right call if you want to add conditional logic later, like skipping the digest on holidays or routing to different channels based on pipeline health. If nobody on your team touches code and you want a GUI-driven setup, Make handles scheduled Copper API calls through its HTTP module with less friction.

Cost

Pipedream's free tier includes 10,000 credits per month. Each run of this workflow consumes roughly 500–1,500 credits depending on how many pagination loops your Copper pipeline requires. At 1,000 credits per run, a daily weekday schedule (22 runs/month) costs about 22,000 credits — which exceeds the free tier. The Basic plan at $19/month includes 100,000 credits, which easily covers this workflow plus several others. Make's Core plan at $9/month includes 10,000 operations and handles the same schedule comfortably if you keep the workflow under 5 modules. Zapier's Starter plan ($19.99/month) covers it too, but you get far less flexibility. Pipedream costs the same as Zapier's entry tier and gives you Node.js access that Zapier gates behind a higher plan.

Tradeoffs

Make's biggest advantage here: it has a visual HTTP module that makes Copper API calls without writing code, and its built-in date formatters handle the Unix timestamp conversions Copper uses without a single line of JavaScript. Zapier has a pre-built Copper integration with a 'Search Opportunities' action — no API call required, but the filtering options are limited and you cannot paginate. n8n's Code node handles this workflow well and the self-hosted option is free, but n8n requires infrastructure management that's overkill for a daily Slack digest. Power Automate has no Copper connector at all — you'd build every call through the HTTP action, which is the same effort as Pipedream but with a slower, less developer-friendly environment. Pipedream wins here because the free tier, Node.js flexibility, and instant log access make iterating on this workflow fast.

Three things you will hit after setup. First: Copper's Unix timestamps are in seconds, not milliseconds. JavaScript's Date.now() returns milliseconds. Multiply or divide by 1000 in the wrong direction and your 'closing this week' filter either returns nothing or returns every deal ever. Second: Copper's API enforces a 600 requests/minute limit per API key — not per workflow run. If other tools or team members are also hitting the Copper API with the same key, your paginated workflow run can get rate-limited mid-execution. Use a dedicated API key for this workflow. Third: Slack's Block Kit formatting looks great in preview but some older Slack clients render mrkdwn inconsistently. If leadership uses the Slack desktop app and the digest looks broken for one person but not another, the issue is usually a client version difference — fall back to plain text formatting in the message body and reserve Block Kit for the attachment payload.
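The seconds-vs-milliseconds trap condensed into one place — a short sketch of the conversions in each direction (the epoch value in the example is illustrative):

```javascript
// Copper timestamps are in SECONDS; JavaScript's Date.now() is MILLISECONDS.
const nowSec = Math.floor(Date.now() / 1000); // compare against Copper fields
const sevenDaysOut = nowSec + 7 * 86400;      // "closing this week" cutoff

// Going the other way: multiply by 1000 before handing to Date.
function copperDateToIso(copperSeconds) {
  return new Date(copperSeconds * 1000).toISOString().slice(0, 10);
}

console.log(copperDateToIso(1767225600)); // -> 2026-01-01
```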

Ideas for what to build next

  • Add a weekly deal forecast summary — extend the workflow with a separate Monday-only branch that pulls Copper opportunities by weighted close probability and posts a forecast vs. quota comparison to #sales-leadership every Monday morning.
  • Trigger rep-specific DMs instead of a channel post — modify the Slack step to send a direct message to each rep with only their own deals; use Slack's conversations.open endpoint with the rep's Slack user ID and match it against the Copper assignee name.
  • Log digest data to Google Sheets for trend tracking — add a Google Sheets step after the Slack step to append each day's pipeline total, deal count, and forecast to a running spreadsheet, giving you a free historical trend view without a BI tool.
