*This is a submission for the Weekend Challenge: Earth Day Edition.*
## What I Built
Have you ever walked through a street or a park, spotted a plastic bag tangled in a bush or a few bottles scattered on the grass, and felt that small but familiar pang of helplessness? You might pick one up. But then you wonder: what about the rest? What if others could see this too? What if we could work on it together?
Keeping our planet clean brings us one step closer to taking better care of this beautiful home we all share. That's what Terrae helps us do.
Terrae is an Android app that lets people report and clean up trash together. You spot litter, you open the app, take a photo, and in seconds Google Gemini AI has analyzed the severity and pinned it to a shared map. Someone nearby can claim the spot, take an after-photo to prove it's clean, and earn points and badges.
Core features:
- Report: Photograph a spot that has trash → Gemini 2.5 Flash rates severity (LOW / MEDIUM / HIGH), generates a description, and pins it to the map
- Shared Map: Real-time Google Maps markers color-coded by severity, with a peek-able bottom sheet showing nearby reports
- Feed: Image-first list of reports sorted by distance
- 4-Gate Clean Validation: the app enforces four gates before awarding points for a cleanup:
  - You can't clean your own report
  - You must be within 100 m of the trash location
  - You must take an after-photo as proof
  - Gemini AI must confirm the area actually looks clean
- Points + Badges: Reporting earns 5–20 pts, cleaning earns 10–50 pts (scaled by severity). Badges: First Report, First Clean, Earth Warden (5 cleans), Green Guardian (10 cleans)
- Profile: Points counter, stats, badge grid, share CTA
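The 4-gate validation above can be sketched as a single pure function. This is a minimal illustration, not the app's actual code; the type and field names (`CleanRequest`, `distanceMeters`, etc.) are hypothetical:

```kotlin
// Hypothetical types for illustration; the app's real models will differ.
data class CleanRequest(
    val reporterId: String,
    val cleanerId: String,
    val distanceMeters: Double,
    val hasAfterPhoto: Boolean,
    val aiConfirmedClean: Boolean,
)

// Returns the first failed gate's message, or null when all four gates pass.
fun validateClean(req: CleanRequest): String? = when {
    req.reporterId == req.cleanerId -> "You can't clean your own report"
    req.distanceMeters > 100.0 -> "You must be within 100 m of the trash location"
    !req.hasAfterPhoto -> "An after-photo is required as proof"
    !req.aiConfirmedClean -> "Gemini could not confirm the area looks clean"
    else -> null
}
```

Checking the gates in this fixed order means the cheapest checks (identity, distance) run before anything that needs a photo or an AI call.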
## Demo
| Flow when a user cleans a place | Flow when reporting a place |
|---|---|
| (demo GIF) | (demo GIF) |

Pictures of the different app screens:

| Map Screen | Feed Screen | Profile Screen |
|---|---|---|
| (screenshot) | (screenshot) | (screenshot) |
## Code
**yousrasd/Terrae** – An Android app to keep our Earth clean by bringing people together to remove trash they find
Terrae is an Android app for reporting trash hotspots and coordinating cleanup. Users can capture or select a photo, let Gemini classify the severity, publish the report to a shared map and feed, and earn points and badges by contributing cleanups.
### What the app does
- Report trash
  - Start from the map
  - Choose Camera or Gallery
  - Run Gemini image analysis
  - Confirm and publish the report
- Browse nearby reports
  - View reports on the map with severity-colored markers
  - Browse the same data in the feed
  - Open a report detail view from the map or feed
- Track cleanup progress
  - Open a report
  - Tap I'll Clean This
  - Confirm readiness from a bottom sheet before camera capture
  - Validate cleanup with proximity checks, photo proof, and Gemini verification
- Earn rewards
  - Reporting and cleaning award points
  - The user profile tracks reports, cleans, and badges
### Core product flows
1. Report flow
Map → Report → Source picker → Camera/Gallery → AI analysis…
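The report flow above is a linear step sequence, which can be sketched as a tiny state machine. A minimal illustration with hypothetical names; the app's real navigation code differs:

```kotlin
// Hypothetical sketch of the report flow as a linear step progression.
enum class ReportStep { Map, SourcePicker, Capture, AiAnalysis, Publish }

fun nextStep(step: ReportStep): ReportStep = when (step) {
    ReportStep.Map -> ReportStep.SourcePicker
    ReportStep.SourcePicker -> ReportStep.Capture   // Camera or Gallery
    ReportStep.Capture -> ReportStep.AiAnalysis     // Gemini rates severity
    ReportStep.AiAnalysis -> ReportStep.Publish     // confirm and publish
    ReportStep.Publish -> ReportStep.Publish        // terminal step
}
```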
## How I Built It
### The Stack
Kotlin + Jetpack Compose for the UI, Firebase (Firestore, Storage, anonymous Auth) for the backend, Google Maps Compose for the shared map, Gemini 2.5 Flash via OkHttp for the AI brain, and Copilot CLI for agentic development.
Architecture is MVVM with a use case layer: domain interfaces → use cases → a single ViewModel → Compose screens. Hilt wires everything together.
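That layering can be sketched in a few lines. The names here are hypothetical and simplified (plain classes instead of androidx `ViewModel` + Hilt injection), just to show how each layer depends only on the one below it:

```kotlin
// Domain interface: what the data layer must provide (hypothetical name).
interface ReportRepository {
    fun nearbyReportTitles(lat: Double, lon: Double): List<String>
}

// Use case: a single unit of business logic over the domain interface.
class GetNearbyReports(private val repo: ReportRepository) {
    operator fun invoke(lat: Double, lon: Double): List<String> =
        repo.nearbyReportTitles(lat, lon)
}

// Stand-in for the ViewModel (the real app uses androidx ViewModel wired by Hilt).
class MapViewModel(private val getNearbyReports: GetNearbyReports) {
    fun onLocationChanged(lat: Double, lon: Double): List<String> =
        getNearbyReports(lat, lon)
}
```

Because the ViewModel only sees the use case, and the use case only sees the interface, the Firestore-backed implementation can be swapped for a fake in tests.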
### The Workflow
I used GitHub Copilot agent mode with three custom agents, each with a specific role, all grounded by a shared copilot-instructions.md that encoded the full architecture, data models, screen structure, and coding conventions:
| Agent | Role |
|---|---|
| android-implementor | Implements each story: writes the Kotlin/Compose code, writes unit and component tests, verifies the build compiles |
| maestro-qa | After each story ships, writes and runs Maestro UI test flows against a real physical device |
| doc-agent | Invoked once at the end to generate project documentation |
The pipeline per story:

Plan story → android-implementor builds it → I review + validate → maestro-qa writes & runs tests → tests pass → next story
I started by using Copilot CLI's plan mode to break the project into 16 stories with a clear dependency graph. From there, I fed each story to the dev agent. It would implement, confirm the build succeeded, and present what it built. I'd review, test on my physical device, and give the go-ahead. Then the QA agent automated the verification.
I stayed in the loop as the decision-maker at every handoff. The agents knew the architecture (from copilot-instructions.md), respected the patterns, and flagged when something didn't fit. I orchestrated the flow and validated every step.
It was challenging at times. The main difficulty was keeping the agents aligned with the architecture. Early on, they would drift and generate inconsistent patterns, so I had to reinforce strict rules inside copilot-instructions.md.
## Prize Categories
### Best Use of Google Gemini
Gemini 2.5 Flash does two critical jobs in Terrae through two dedicated agents:
ScanAgent – When you report a trash spot, Gemini looks at your photo and returns a structured JSON response with severity (LOW/MEDIUM/HIGH), a human-readable description, and a confidence score. That severity drives the map marker color and the points reward. A stray bottle cap is LOW (5 pts to report, 10 to clean); a scattered pile of bags and debris is HIGH (20 pts to report, 50 to clean).
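The severity-to-points mapping can be written as two small `when` expressions. The LOW and HIGH values come from the numbers above; the MEDIUM values here are assumed midpoints, not confirmed by the app:

```kotlin
enum class Severity { LOW, MEDIUM, HIGH }

// LOW and HIGH values are from the post; MEDIUM values are assumed midpoints.
fun reportPoints(s: Severity): Int = when (s) {
    Severity.LOW -> 5
    Severity.MEDIUM -> 10   // assumption
    Severity.HIGH -> 20
}

fun cleanPoints(s: Severity): Int = when (s) {
    Severity.LOW -> 10
    Severity.MEDIUM -> 25   // assumption
    Severity.HIGH -> 50
}
```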
CleanAgent – When you claim to have cleaned a spot, Gemini looks at your after-photo and decides whether the area actually looks clean. It returns isClean, confidence, and reason. The confidence threshold is 0.6; below that, the clean is rejected and the user is told why.
Both agents use the same model (gemini-2.5-flash) via OkHttp with base64-encoded images and structured text prompts. Both return JSON that the app parses into typed Kotlin data classes.
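As a sketch of that parse step: the field names (`isClean`, `confidence`, `reason`) and the 0.6 threshold come from the description above, but the parsing itself is a dependency-free illustration, not the app's actual code, which would more likely use a JSON library such as kotlinx.serialization or Moshi:

```kotlin
// Hypothetical typed model of the CleanAgent response.
data class CleanVerdict(val isClean: Boolean, val confidence: Double, val reason: String)

// Minimal, dependency-free parse of a flat JSON object via regex; a real app
// should use a proper JSON library instead.
fun parseCleanVerdict(json: String): CleanVerdict {
    fun field(name: String): String =
        Regex("\"$name\"\\s*:\\s*(\"[^\"]*\"|[^,}\\s]+)")
            .find(json)!!.groupValues[1].trim('"')
    return CleanVerdict(
        isClean = field("isClean").toBoolean(),
        confidence = field("confidence").toDouble(),
        reason = field("reason"),
    )
}

// Apply the 0.6 confidence threshold described in the post.
fun acceptClean(v: CleanVerdict): Boolean = v.isClean && v.confidence >= 0.6
```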
Thanks to Gemini, the app can autonomously judge the severity of reported trash and validate that the cleanup work was actually done.
### Best Use of GitHub Copilot
I used Copilot's agent mode with three custom agents, each with a defined role, all grounded in a shared copilot-instructions.md that encoded the full architecture, data models, and conventions:
- android-implementor – the dev agent, implementing each story in Kotlin/Compose
- maestro-qa – the QA agent, writing and running Maestro UI tests against a real device after each story
- doc-agent – the documentation agent, invoked once at the end
I first started plan mode through Copilot CLI and, with Copilot's help, came up with a solid plan. Once the plan was defined and the stories broken down, the pipeline per story was: the dev agent implements (and confirms the build succeeds) → I review and validate → the QA agent runs tests → tests pass → next story. I stayed in the loop as the decision-maker at every step. At the end, the doc agent wrote the final documentation for the entire project.
What I liked about this workflow is that, even though I delegated the implementation work to AI, I was there orchestrating the flows (dev → QA → doc) and validating every piece of AI output.
Copilot CLI made the entire development workflow fun.
You can see the detailed workflow here.
## What's Next
That was a fun challenge. Thanks, DEV community, for such a great theme idea.
I would love to keep building on it: for example, adding actual account creation and a leaderboard. And of course, expanding it to iOS. Right now it's Android-only, but the idea works on any phone.
This was one of the most fun builds I've done in a while, especially experimenting with AI agents in the loop for mobile app development.
I'm curious to hear from other developers. How are you integrating AI agents into your daily coding workflow?









