
Teaching While Monitoring: AI Plant Identification for Conservation NGOs Who Want Communities to Learn, Not Just Report
Digital MRV 

Project Type: AI Plant Identification | Citizen Science Biodiversity Monitoring | Education-First Species Data Collection
Location: Northern Norway (Lofoten archipelago, 68°N) — subarctic coastal ecosystem
Key capabilities shown: AI-powered species suggestions with probability scores, full taxonomy data, species description, edible parts identification, GPS coordinate capture, step-by-step guided survey flow.

Why Education Comes Before Data Collection


There's a tension at the heart of most community biodiversity monitoring programmes that rarely gets discussed openly.

On one side, conservation NGOs and project developers need accurate, GPS-tagged species data — the kind that can support biodiversity co-benefit claims, satisfy CCB Standards validators, and demonstrate that conservation projects are actually protecting ecosystems rather than just asserting they are. The data needs to be reliable, consistent, and collected at scale.


On the other side, communities are being asked to collect data about things they may not yet fully understand. A community member who is told to report which plants they see, but has no real understanding of why certain species matter or what their presence indicates about ecosystem health, is doing something fundamentally different from genuine environmental stewardship. They're performing a task, not building a relationship with their landscape.


CitizenClimate's approach is to collapse the distinction between those two things. The education and the monitoring happen at the same moment, in the same interaction. When a community member uses the plant identification feature, they're not just submitting a data point — they're learning something real about the species they've encountered. Over time, repeated identification experiences build genuine ecological literacy, and community members who understand their ecosystems become increasingly effective monitors of them.


The AI plant identification feature is where this principle is most directly expressed.

What the Plant Identification Feature Does


The plant identification feature allows community members to photograph any plant in the field and receive an AI-generated list of species suggestions, ranked by probability, with supporting information that helps the user understand what they're looking at and decide whether the identification is correct.


Each suggestion in the list includes a probability score — a percentage confidence that the photographed plant matches the suggested species. It includes a reference photograph of the suggested species for visual comparison. It includes common names in multiple languages, enabling users to connect the AI identification to their own local knowledge of the plant. It includes a species description that places the plant in its ecological and geographic context. It includes full taxonomy data — class, family, genus, kingdom, order, and phylum — providing the scientific framework that connects a single field observation to the broader organisation of plant life. And where relevant, it includes practical information about edible parts — which parts of the plant are edible, if any.


The user reviews these suggestions, draws on their own knowledge and the reference information provided, selects the identification they believe is correct, and submits. The platform logs the identification alongside the precise GPS coordinates of the observation, the photograph, and the timestamp — creating a verified, location-specific biodiversity record that feeds into the project's monitoring dataset.
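The record described above can be sketched as a small data structure. This is purely illustrative — the field names and types are hypothetical, not the platform's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative sketch only: field names are hypothetical, not the
# platform's actual schema.
@dataclass
class PlantObservation:
    species: str        # identification the user selected
    confidence: float   # AI probability score for that suggestion, in %
    latitude: float     # GPS coordinates captured automatically at submission
    longitude: float
    photo_ref: str      # reference to the uploaded field photograph
    timestamp: datetime

# The example from this page: Cornus suecica in the Lofoten archipelago.
obs = PlantObservation(
    species="Cornus suecica",
    confidence=51.38,
    latitude=68.169649275664,
    longitude=14.693419301429874,
    photo_ref="photo-0001",  # hypothetical identifier
    timestamp=datetime.now(timezone.utc),
)
```

Because every record carries species, confidence, location, photo reference, and timestamp together, each submission is self-describing: it can be mapped, audited, and compared against later observations from the same spot.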


This is citizen science in the most literal sense: scientific data collection conducted by non-specialist community members, supported by AI tools that make specialist knowledge accessible in the field.

[Image: ground-level field photograph of Cornus suecica, the dwarf cornel]
From Photo to Species: The Five-Step Identification Flow

The plant identification process is built as a guided five-step survey flow, visible in the screenshot as Step 3 of 5. The stepped structure is deliberate — it breaks what could feel like a complex or technical task into manageable stages, and it ensures that users engage meaningfully with each part of the process rather than rushing through to submission.


The flow takes a community member from field observation to verified GPS-tagged species record in a single coherent experience. The photograph is taken. The AI analyses it. The suggestions are presented. The user makes an informed selection. The record is submitted and logged. Each step is explicit, clearly labelled, and designed for users who may be doing this for the first time — or for the hundredth time, in which case the familiarity of the flow makes the process efficient.


The step indicator — Step 3 of 5 with a progress bar — gives users a clear sense of where they are and how much remains. This is a small design detail that has a meaningful effect on completion rates. Users who can see they are three-fifths through a process are significantly more likely to complete it than users who have no sense of how much further they have to go.


Probability Scores and Multiple Suggestions: Handling Uncertainty Honestly


One of the most important design decisions in the plant identification feature is the explicit display of probability scores alongside each suggestion.

 

The screenshot shows two suggestions returned for a photographed plant in northern Norway. The first — Cornus suecica, the dwarf cornel or bunchberry, a small flowering plant in the dogwood family native to subarctic and cool temperate regions of Europe and Asia — carries a probability of 51.38%. The second — feathery false lily of the valley, also known as Solomon's plume or false Solomon's seal — carries 35.17%.


A probability of 51.38% is not certainty. It's the AI's best assessment based on the image, but it leaves room for the user to disagree, to apply their own local knowledge, or to choose the second suggestion if it better matches what they're seeing. This is the right design. An AI that presents single definitive identifications without confidence scores creates a false sense of certainty and removes the user from the identification process. An AI that shows its working — here are the two most likely candidates, here is how confident I am in each, here is the reference information you need to make your own judgement — invites the user to be an active participant in the identification rather than a passive recipient of a machine answer.
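A minimal sketch of how ranked suggestions with probability scores might be handled, using the two scores from the screenshot (the structure is illustrative, not the platform's actual API):

```python
# Hypothetical structure for the AI's candidate list; the two probability
# scores are the ones shown in the screenshot described in the text.
suggestions = [
    {"species": "feathery false lily of the valley", "probability": 35.17},
    {"species": "Cornus suecica (dwarf cornel)", "probability": 51.38},
]

# Present candidates to the user ranked from most to least likely,
# keeping the scores visible so uncertainty stays explicit.
ranked = sorted(suggestions, key=lambda s: s["probability"], reverse=True)
for s in ranked:
    print(f'{s["probability"]:5.2f}%  {s["species"]}')
```

Keeping the full ranked list, rather than collapsing to the single top answer, is what leaves room for the user's own judgement: a 51.38% leader over a 35.17% runner-up is a suggestion, not a verdict.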


This matters both for data quality and for education. A user who has compared the reference photograph of Cornus suecica against the plant they photographed, read the species description, noted the characteristic red berry clusters, and actively selected the identification has learned something. A user who was simply told "this is dwarf cornel" by an app has not.


Taxonomy, Species Descriptions, and Edible Parts: What Communities Learn


The information returned alongside each species suggestion is designed to be genuinely educational, not just functionally useful for identification purposes.


The taxonomy data — in the case of Cornus suecica, class Magnoliopsida, family Cornaceae, genus Cornus, kingdom Plantae, order Cornales, phylum Tracheophyta — connects a single plant observation to the scientific classification system that organises all plant life. For a community member who has never encountered these categories before, seeing them displayed consistently alongside every identification they make gradually builds familiarity with the structure of botanical science. After fifty or a hundred identifications, the concepts of genus, family, and order become intuitive rather than abstract.
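The ranks listed above nest from broadest to narrowest, which a simple mapping makes easy to see (the data is the Cornus suecica taxonomy from this page; the structure itself is just an illustration):

```python
# Taxonomy returned for Cornus suecica, arranged from broadest rank to
# narrowest. Dict insertion order is preserved in modern Python.
taxonomy = {
    "kingdom": "Plantae",
    "phylum": "Tracheophyta",
    "class": "Magnoliopsida",
    "order": "Cornales",
    "family": "Cornaceae",
    "genus": "Cornus",
}

# Reading the ranks in order traces a single field observation's place
# in the overall classification of plant life.
print(" > ".join(taxonomy.values()))
# → Plantae > Tracheophyta > Magnoliopsida > Cornales > Cornaceae > Cornus
```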


The species description — in this case, information on Cornus suecica as a species of flowering plant in the dogwood family Cornaceae, native to cool temperate and subarctic regions of Europe and Asia and locally present in extreme northeastern and northwestern North America — places the plant in its geographic and ecological context. Community members learn not just what a plant is called, but where it lives, what its distribution is, and what kind of environment it belongs to.


The edible parts information — for Cornus suecica, the fruit is edible — is practically valuable and adds a dimension of relevance that purely scientific information sometimes lacks. In many of the communities where CitizenClimate is used, local knowledge of which plants are edible is already present. The edible parts field creates a connection between that traditional knowledge and the scientific identification system, reinforcing both.


GPS-Tagged Records: Each Identification as a Biodiversity Data Point


Every plant identification submitted through the app is automatically accompanied by the precise GPS coordinates of the observation. The screenshot shows the full captured coordinates — 68.169649275664°N, 14.693419301429874°E — a location in the Lofoten archipelago of northern Norway, above the Arctic Circle.


This level of coordinate precision is what distinguishes citizen science data collected through a purpose-built platform from informal observations or manually transcribed records. The GPS coordinates are captured automatically at submission, tied to the photograph and the species identification, and stored as a permanent, auditable record. The observation can be mapped, analysed spatially, compared to previous observations from the same location, and used as part of a longitudinal biodiversity record for the survey area.


For conservation NGOs and project developers, this GPS-tagged species data is genuinely useful. Biodiversity baseline surveys that would traditionally require expensive specialist field teams can be supplemented — or in some contexts, partially replaced — by community monitoring data collected through the platform. CCB Standards require evidence of biodiversity co-benefits, and GPS-tagged community species records, accumulated over time, provide exactly that evidence in an auditable, location-specific format.


The coordinates shown here are stored to a dozen or more decimal places — more digits than the measurement itself can support. Modern smartphone GPS hardware is typically accurate to within a few metres in open terrain; the trailing digits are an artifact of how the values are stored, not additional accuracy. Even so, metre-level positioning locates each observation more precisely than many traditional paper-based field survey methods.
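To put decimal places of latitude into ground distance: one degree of latitude corresponds to roughly 111.32 km everywhere on Earth, so each additional decimal place narrows the position tenfold. A short calculation makes the relationship concrete:

```python
# Approximate ground distance represented by one unit in the n-th decimal
# place of latitude (1 degree of latitude is roughly 111.32 km).
METRES_PER_DEGREE_LAT = 111_320.0

for decimals in range(1, 9):
    size = METRES_PER_DEGREE_LAT / (10 ** decimals)
    print(f"{decimals} decimal place(s) ~ {size:.4f} m")

# Around 5 decimal places (~1.1 m) the stored precision already exceeds
# typical smartphone GPS accuracy of a few metres; digits beyond that are
# floating-point output, not extra measurement accuracy.
```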


[Image: ground-level field photograph of Cornus suecica, the dwarf cornel]
[Image: CitizenClimate mobile app Plant Identification screen, Step 3 of 5 of the guided survey]
[Image: species description text shown in the plant identification feature]

What This Looks Like in Practice: Cornus suecica in Northern Norway


The screenshots accompanying this feature page were captured during a real plant identification session in the Lofoten archipelago of northern Norway, at a latitude of 68°N — well above the Arctic Circle, in a subarctic coastal ecosystem characterised by low shrubs, mosses, lichens, and the characteristic plant communities of the boreal-subarctic transition zone.


The plant being identified is Cornus suecica — dwarf cornel, also known as bunchberry, Lapland cornel, or Swedish cornel. It's a small, low-growing perennial plant of the dogwood family, rarely more than 15-20cm tall, recognisable by its characteristic whorled leaves and, in late summer and autumn, by the tight clusters of bright red berries that the photograph captures clearly. It grows in mossy, acidic woodland and heath habitats across Scandinavia, the British uplands, northern Russia, and locally in northeastern North America.


The photograph itself — shown alongside the AI identification screen — was taken at ground level in a mossy subarctic heath, capturing the plant growing amongst Sphagnum and other mosses, with lingonberry and other dwarf ericaceous shrubs visible in the background. The image quality is good enough for the AI to return a clear leading suggestion at 51.38% — but not so unambiguous that the AI can be certain, which is ecologically realistic for a plant that grows alongside several similar low-growing species in this habitat.


This is exactly the scenario the plant identification feature is designed for: a community member or citizen scientist in a complex, species-rich habitat, encountering a plant they may or may not recognise, using the app to narrow down the identification, learn about the species, and contribute a GPS-tagged observation record to the project dataset.

Building Ecological Literacy Over Time


A single plant identification is a data point. A hundred plant identifications, made by the same community member over weeks and months, is something more: the beginning of genuine ecological literacy.


The platform's gamification system — badge progression, reward points, and the visible accumulation of survey completions — is designed to sustain engagement across the timescales that meaningful ecological learning requires. Community members who are motivated to keep submitting identifications are community members who keep encountering new species, keep learning new taxonomy, keep building their understanding of which plants grow where and why.


Over time, this creates a community of monitors who know their local ecosystems in a way that is both scientifically structured and practically grounded — people who can identify the difference between Cornus suecica and feathery false lily of the valley not because an app told them once, but because they've photographed both repeatedly, read about both, and built the visual and ecological knowledge that makes the distinction intuitive.


For conservation NGOs, this trajectory matters. A community that understands its ecosystem is a community that has reasons to protect it beyond the externally imposed incentives of a carbon or conservation project. The education-first principle isn't just good pedagogy — it's a practical strategy for building the long-term community ownership that makes conservation projects work across the multi-decade timescales they actually require.

Using Plant Identification for Biodiversity Baseline Surveys and Co-Benefit Documentation


For conservation project developers specifically, the plant identification feature supports two practical use cases that are often difficult and expensive to address through traditional means.


The first is biodiversity baseline surveys. Most conservation projects are required to document the biodiversity present in the project area at baseline — what species are present, where, and in what abundance.

 

 

Traditional baseline surveys require specialist botanists and zoologists conducting structured field surveys over defined periods, at significant cost. Community-reported plant identification data, collected continuously by local monitors who spend time in the project area every day, provides a complementary dataset that extends spatial and temporal coverage far beyond what specialist surveys can achieve. The GPS-tagged, timestamped records from the app are auditable and mappable — they can be presented to validators as evidence of species presence across the project area.


The second is co-benefit documentation. Projects pursuing CCB Standards or similar biodiversity co-benefit certifications need ongoing evidence that the project area is supporting plant and animal communities. Community plant identification records — accumulated over months and years, covering multiple species across the project area — provide exactly this evidence in a format that is continuously updated rather than frozen at the baseline survey date.


Neither of these use cases requires every community identification to be perfectly accurate. The probability scores and species suggestions create a transparent record of identification confidence. Records with high confidence scores from multiple independent observations carry stronger evidential weight than single uncertain ones. The platform's data can be filtered, analysed, and presented to validators in ways that appropriately reflect the confidence levels of the underlying identifications.
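A sketch of what such confidence-based filtering might look like before presenting data to validators. The 51.38% score comes from the example on this page; the other records, the threshold, and the field names are invented for illustration:

```python
from collections import Counter

# Hypothetical community records; only the 51.38% score is taken from
# the example on this page.
records = [
    {"species": "Cornus suecica", "confidence": 51.38},
    {"species": "Cornus suecica", "confidence": 78.40},
    {"species": "Vaccinium vitis-idaea", "confidence": 22.10},
]

# Keep only identifications above a chosen confidence threshold, then
# count independent confident observations per species.
THRESHOLD = 50.0
confident = [r for r in records if r["confidence"] >= THRESHOLD]
per_species = Counter(r["species"] for r in confident)
print(per_species)
```

Species supported by multiple independent confident observations can then be presented with stronger evidential weight than single uncertain records.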

[Image: species description text shown in the plant identification feature]

Which Projects and Contexts Is This Built For?


The plant identification feature works for any conservation project or NGO programme where community biodiversity monitoring is part of the scope.


It is most immediately relevant for projects pursuing CCB Standards, where biodiversity co-benefit documentation is a validation requirement. For REDD+ projects, AFOLU conservation initiatives, and peatland or wetland restoration projects, community plant identification records contribute to the species diversity evidence base that validators review.


It works across climatic zones and ecosystem types. The species AI is trained on plant species from tropical forests, temperate woodlands, subarctic heaths, dryland savannahs, and wetland ecosystems. The taxonomy framework is universal. The GPS tagging works identically in Kalimantan, central India, the Amazon basin, or, as demonstrated here, above the Arctic Circle in northern Norway.


It works for community monitors at any level of prior ecological knowledge. The guided five-step flow, probability scores, reference photographs, and species descriptions are designed to support users who are identifying plants for the first time. The same tools provide useful supplementary information for users who already have strong local botanical knowledge. The platform meets community members where they are and builds from there.

 

And it integrates naturally with the broader CitizenClimate survey framework — plant identification observations can sit alongside biodiversity surveys, habitat condition assessments, threat monitoring records, and SDG impact surveys in a single project monitoring package.

[Image: ground-level field photograph of Cornus suecica, the dwarf cornel]

Your Project Could Work Like This

If you're working on a climate or environmental project that needs verified community data, you're probably facing similar questions to the ones in this case study.


How do you prove your project is working beyond just the technical metrics? What data do your funders need for carbon credits or ESG reporting? How do you catch problems on the ground before they undermine your results? Most importantly—how do you ensure the people affected by your project actually understand and benefit from it?


The difference between projects that succeed and ones that struggle often comes down to whether you're measuring the right things. Carbon calculations tell you about emissions. Community feedback tells you whether the intervention is actually working in practice. Education ensures that feedback is informed, not just reactive.


We've built the survey systems, education modules, and geotagged monitoring tools that made this project work. The same approach adapts to your context—different activities, different locations, different communities, different objectives.


What you get:

  • Custom education modules that teach participants about what they're monitoring and why it matters

  • Multilingual surveys designed for offline use in areas with limited connectivity

  • GPS-tagged responses that show location-specific patterns and problems

  • Anonymous feedback systems that protect privacy whilst collecting honest data

  • Verified data packages that meet carbon credit, MRV, and ESG reporting requirements

  • Operational insights that help you fix problems before they become failures


What your project needs:

  • A climate, environmental, or development initiative (planning stage or already operating)

  • Community members whose participation and feedback would strengthen your project

  • Funders or stakeholders who want proof of impact alongside technical metrics


The platform works whether you're monitoring 10 hectares or 10,000, whether you're in a remote village or an urban centre, whether your participants speak Spanish, English, French, Hindi, Indonesian, or Ukrainian.


Get Started

If you're working on a project that needs more than just technical data—where community engagement and verified feedback actually matter—let's talk about how this approach could work for you.


Or if you're not sure whether this approach fits your situation, send us a quick message describing what you're trying to achieve. We'll tell you honestly whether education-based community monitoring makes sense for your context.


Email us: nick@citizenclimate.net
