In part one, I covered the anonymous submission flow. In part two, I covered the public board, voting, and how I kept prioritization deliberately simple.
This part is about the piece I am most happy with: closing the loop.
The whole reason LoopSignal is called LoopSignal is that feedback is supposed to come back to the user who left it. Someone submits a request, the team works on it, it ships, and the original submitter hears about it. That round trip is the entire product.
For most teams, the place that work actually happens is GitHub. So if LoopSignal is going to close the loop, it has to live where the engineering work lives. That meant a real GitHub integration: linking posts to issues, automatically closing posts when issues close, and notifying everyone who cared.
Here is how I built it.
What "closing the loop" actually requires
Before writing any code, I listed what closing the loop really meant.
It needed to:
- let a team connect a GitHub repo to a project
- create a GitHub issue from an approved feedback post with one click
- listen for issue events from GitHub via webhooks
- automatically update the feedback post's status when the issue closes or reopens
- notify everyone subscribed to the post when its status changes
- handle anonymous subscribers and registered users equally
- not block the rest of the product if GitHub is slow or unreachable
That is more moving parts than the submission flow or the public board, but each piece is small if you keep them separate.
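Those requirements translate into a handful of columns across three tables. Sketched as TypeScript types (column names are the ones used throughout this article; the exact database types are simplified):

```typescript
// Rough shape of the rows involved. Simplified: a few columns are
// omitted, and the real schema lives in Supabase migrations.
type ProjectRow = {
  id: string;
  name: string;
  slug: string;
  github_repo: string | null;   // "owner/repo" once connected
  github_token: string | null;  // OAuth access token from the connect flow
  notifications_enabled: boolean;
};

type PostRow = {
  id: string;
  project_id: string;
  title: string;
  description: string | null;
  category: string | null;
  status: "pending" | "approved";
  workflow_status: "open" | "planned" | "in_progress" | "completed" | "closed";
  github_issue_url: string | null; // set after issue creation
};

type PostSubscriptionRow = {
  id: string;
  post_id: string;
  user_id: string | null; // registered subscriber
  email: string | null;   // anonymous subscriber
  unsubscribed: boolean;
};
```

Both `user_id` and `email` on subscriptions are nullable on purpose; more on that below.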
Connecting a repo: OAuth, scope, and storage
The connection flow is a standard GitHub OAuth dance, with one detail that matters.
The initiate route encodes both projectId and userId into the OAuth state parameter so the callback can verify access before binding a token to a project:
const state = Buffer.from(
  JSON.stringify({ projectId, userId: user.id })
).toString("base64url");
const githubAuthUrl = new URL("https://github.com/login/oauth/authorize");
githubAuthUrl.searchParams.set("client_id", clientId);
githubAuthUrl.searchParams.set("redirect_uri", redirectUri);
githubAuthUrl.searchParams.set("scope", "repo");
githubAuthUrl.searchParams.set("state", state);
The callback decodes the state, verifies the user actually owns the project, exchanges the code for an access token, and stores the token on the projects row in a github_token column.
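The decode half of that round trip is only a few lines. A sketch (the helper name is mine, and real error handling would be richer):

```typescript
// Decode the OAuth state parameter back into the identifiers encoded
// at the start of the flow. Returns null for anything malformed, so a
// tampered or truncated state just fails the flow instead of throwing.
function decodeOAuthState(
  state: string
): { projectId: string; userId: string } | null {
  try {
    const parsed = JSON.parse(
      Buffer.from(state, "base64url").toString("utf8")
    );
    if (
      typeof parsed?.projectId === "string" &&
      typeof parsed?.userId === "string"
    ) {
      return { projectId: parsed.projectId, userId: parsed.userId };
    }
    return null;
  } catch {
    return null;
  }
}
```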
I chose scope: "repo" deliberately. A more granular scope would have been nicer, but creating issues, registering webhooks, and reading the repo all need it, and trying to split that into multiple flows would add friction without any real security gain for the kind of teams using this product.
Creating an issue from an approved post
Once a project has a connected repo, every approval can optionally produce an issue.
The logic lives in the moderation action. When a post moves from pending to approved, and the project has both github_repo and github_token set, it fires off an issue creation in the background:
if (newStatus === "approved" && project.github_repo && project.github_token) {
  createGitHubIssue(post, project).catch(console.error);
}
That .catch(console.error) matters. The product's source of truth is the LoopSignal database, not GitHub. If GitHub is rate-limiting us or down, the post should still move to approved. The issue creation is an enhancement, not a dependency.
The actual creation just hits the GitHub REST API:
const res = await fetch(
  `https://api.github.com/repos/${owner}/${repo}/issues`,
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${project.github_token}`,
      Accept: "application/vnd.github+json",
    },
    body: JSON.stringify({
      title: post.title,
      body: `${post.description ?? ""}\n\n---\nCreated from LoopSignal feedback`,
      labels: post.category ? [post.category] : undefined,
    }),
  }
);
After it succeeds, the post row's github_issue_url column is set to the issue's html_url. That URL is the link between the two systems for the rest of the lifecycle.
The footer line in the issue body — "Created from LoopSignal feedback" — exists for a real reason. It tells the engineer reading the issue why it exists, and where to look if they want context like vote counts or comments. Without that, GitHub issues created from external sources tend to feel orphaned.
Registering the webhook
Connecting the repo also registers a webhook. This is the part that lets GitHub talk back to LoopSignal when something changes.
I check for an existing webhook first, because registering unconditionally would create a duplicate hook on every connect, and during testing, on every page reload:
const hooksRes = await fetch(`${repoApi}/hooks`, {
  headers: { Authorization: `Bearer ${token}` },
});
const existing = await hooksRes.json();
const alreadyHooked = existing.some(
  (h: { config?: { url?: string } }) => h.config?.url === webhookUrl
);

if (!alreadyHooked) {
  await fetch(`${repoApi}/hooks`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}` },
    body: JSON.stringify({
      name: "web",
      active: true,
      events: ["issues"],
      config: { url: webhookUrl, content_type: "json" },
    }),
  });
}
I only subscribe to the issues event. Pull requests, pushes, and the rest are not part of the loop right now, and subscribing to events you do not use is a great way to fill your logs with noise.
The webhook URL is built from the request host so this works the same way locally and in production without environment-specific config.
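That construction is small enough to sketch in full. The path and the header fallbacks here are my choices for illustration; the idea is just to derive the URL from the request rather than from an environment variable:

```typescript
// Build the webhook target from the incoming request's headers, so the
// same code registers a localhost URL in development and the real
// domain in production. Accepts anything with a Headers-like get().
function buildWebhookUrl(headers: { get(name: string): string | null }): string {
  const host = headers.get("host") ?? "localhost:3000";
  const proto =
    headers.get("x-forwarded-proto") ??
    (host.startsWith("localhost") ? "http" : "https");
  return `${proto}://${host}/api/github/webhook`;
}
```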
Reacting to issue events
The webhook handler is small on purpose. It has one job: translate GitHub issue actions into LoopSignal workflow statuses.
let workflowStatus: WorkflowStatus | null = null;
if (action === "closed") {
  workflowStatus = "completed";
} else if (action === "reopened") {
  workflowStatus = "open";
}

if (!workflowStatus) {
  return NextResponse.json({ ok: true });
}

const { data: post } = await admin
  .from("posts")
  .select("id")
  .eq("github_issue_url", payload.issue.html_url)
  .single();

if (post) {
  await admin
    .from("posts")
    .update({ workflow_status: workflowStatus })
    .eq("id", post.id);
}
A few things were intentional here.
I match posts to issues by the full html_url, not by issue number. Issue numbers are only unique within a repo, and a project could in theory swap which repo it is connected to. The HTML URL is globally unique and already stored when the issue was created.
I ignore actions other than closed and reopened. edited, assigned, labeled, and the rest can be useful eventually, but right now they would just create noise in the public board's history. Closing the loop only needs the two events that change visible state.
I also do not currently verify the webhook signature. That is on my list. For an early version with low traffic, the cost of a malicious webhook is bounded: at worst, someone could mark posts as completed. And HMAC verification is cheap enough to add later, so I would rather wait until the webhook surface is stable before locking in the secret handling.
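When I do add it, the check itself is a few lines. A sketch, assuming the hook was registered with a secret in its config (which the registration call above would need to pass):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// GitHub signs each delivery with HMAC-SHA256 over the raw request
// body, keyed by the webhook secret, and sends "sha256=<hex>" in the
// X-Hub-Signature-256 header.
function verifyWebhookSignature(
  rawBody: string,
  signatureHeader: string | null,
  secret: string
): boolean {
  if (!signatureHeader || !signatureHeader.startsWith("sha256=")) return false;
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const received = signatureHeader.slice("sha256=".length);
  // timingSafeEqual throws on length mismatch, so guard first.
  if (received.length !== expected.length) return false;
  return timingSafeEqual(Buffer.from(received), Buffer.from(expected));
}
```

The one trap is that the HMAC must be computed over the raw body bytes, not a re-serialized JSON.parse of them, so the route has to read the body as text before parsing.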
Notifying the people who care
A status change without notifications is just a database update. The whole reason for this system is that someone submitted feedback and wants to know what happened to it.
Notifications run from a small helper that fires after status changes:
const meaningfulStatuses = ["planned", "in_progress", "completed", "closed"];
if (!meaningfulStatuses.includes(newStatus)) return;

const { data: project } = await admin
  .from("projects")
  .select("notifications_enabled, name, slug")
  .eq("id", post.project_id)
  .single();
if (!project?.notifications_enabled) return;

const { data: subscriptions } = await admin
  .from("post_subscriptions")
  .select("id, user_id, email")
  .eq("post_id", post.id)
  .eq("unsubscribed", false);
Two things in here matter more than they might seem to.
First, only some statuses send email. open does not — every post starts as open, so emailing about it would be spam. The notification list is the four states that actually represent progress.
Second, the project itself has a notifications_enabled flag. Some teams want quiet boards, especially during early product development, and forcing them into outbound email would push them off the product.
Anonymous and registered subscribers in one table
This part took a bit of thought.
Subscribers can come from two places:
- registered users who voted while logged in
- anonymous users who left an email when submitting
Rather than create separate tables for each, I store both in a single post_subscriptions table with two nullable columns: user_id and email. At notification time, the code resolves whichever is set:
let recipientEmail = sub.email;
if (!recipientEmail && sub.user_id) {
  const { data: { user } } =
    await admin.auth.admin.getUserById(sub.user_id);
  recipientEmail = user?.email ?? null;
}
if (!recipientEmail) continue;

await sendStatusUpdateEmail({
  to: recipientEmail,
  postTitle: post.title,
  newStatus,
  unsubscribeUrl: `${baseUrl}/api/unsubscribe?id=${sub.id}`,
});
That keeps the subscription model simple and lets registered users update their email in one place — their account — rather than having to update every subscription. Anonymous emails are stored directly because there is nothing else to look them up against.
Every email carries an unsubscribe link tied to the specific subscription row. One click marks unsubscribed = true and the row stops receiving future updates. That is non-negotiable for any system sending unsolicited mail to people who left an email a long time ago.
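The core of that endpoint is simple enough to sketch. Here I have pulled the logic out of the route and injected the database write as a callback, so the sketch stands alone; in the real route, markUnsubscribed would be the Supabase update that sets unsubscribed = true and reports whether the row existed:

```typescript
// One-click unsubscribe: no account, no confirmation page, just the
// subscription id from the link in the email.
async function handleUnsubscribe(
  requestUrl: string,
  markUnsubscribed: (id: string) => Promise<boolean>
): Promise<{ status: number; body: string }> {
  const id = new URL(requestUrl).searchParams.get("id");
  if (!id) return { status: 400, body: "Missing subscription id" };
  const found = await markUnsubscribed(id);
  return found
    ? { status: 200, body: "You are unsubscribed from updates on this post." }
    : { status: 404, body: "Subscription not found" };
}
```

Because the id is row-specific, unsubscribing from one post does not touch the user's other subscriptions.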
What the loop actually feels like now
When all of these pieces line up, the experience is the part of the product I am most proud of.
Someone submits feedback anonymously. A team member approves it. A GitHub issue gets created. The team works on the feature. They merge the PR and close the issue. GitHub fires the webhook. LoopSignal moves the post to completed. Everyone subscribed gets an email saying the thing they asked for shipped, with a link back to the post and an option to unsubscribe.
The user who submitted the original feedback never had to create an account, never had to log in to check status, never had to chase the team for an update. The system did the chasing for them.
That is the loop.
What I learned building this piece
A few things stood out while putting this together.
External integrations should fail soft. Issue creation, webhook delivery, and email sending all run async and non-blocking from the user-visible moderation action. If GitHub is unreachable, the post still gets approved. If email fails, the status still updates. The product database is the source of truth, and the integrations enhance it.
Subscribe to as little as possible. The webhook only listens for issues events, and the handler only acts on closed and reopened. Every event you do not handle is a future log line, a future bug, a future maintenance question. Smaller surface area is easier to reason about.
Identity layers should converge late. Anonymous emails and registered user IDs both live in the subscriptions table without being merged. They only resolve to a real recipient at the moment we send the email. That keeps the data model simple and avoids brittle account-linking logic.
The unsubscribe link is part of the product, not an afterthought. It needs to be in every email, work without an account, and be a one-click flow. Treating it as an edge case is how products end up with angry users and ignored mail.
What's next
In the next article, I will cover the changelog: how completed posts roll into a public, shareable list of what shipped, how it stays in sync with the board automatically, and why I decided the changelog should be generated from the board rather than being a separate authoring surface.
If you are building something similar, my takeaway from this part is simple:
The loop is the product.
Submission, voting, and prioritization all matter, but they only become valuable when the user who left feedback hears that something happened. Build that round trip first, and the rest of the product gets clearer.