Build Your First AI: Flower Recognition in 100 Lines
A complete beginner's guide to building an AI that recognizes flowers with 98.6% accuracy. No PhD required—just Python and curiosity! 🌸
What You'll Learn
By the end of this tutorial, you'll have built a real AI model that can identify 5 different types of flowers. We'll cover everything from downloading data to deploying a web interface—all in about 100 lines of Python.
Perfect for: Students learning AI, developers curious about machine learning, or anyone who wants to build something cool without drowning in theory.
📚 Project Resources
Step 1: Install & Import Libraries
First, we need to install our tools. Think of these like apps on your phone—each one does a specific job.
# Install the libraries we need
!pip install ultralytics kagglehub gradio -q
# Import them into our code
import os # For working with files/folders
import shutil # For copying files
import kagglehub # For downloading datasets
import gradio as gr # For building the web interface
from sklearn.model_selection import train_test_split # For splitting data
from ultralytics import YOLO # The AI model we'll use
💡 What's happening? We're installing Ultralytics (which ships YOLO, our AI model), KaggleHub (to download flower images), and Gradio (to create a web interface). The -q flag means "quiet mode", so pip won't spam us with installation messages.
Step 2: Download & Organize Data
AI models learn from examples. We'll download 4,000+ flower photos and organize them into folders.
# Download the flower dataset from Kaggle
print("--- Step 1: Downloading & Organizing Data ---")
path = kagglehub.dataset_download("alxmamaev/flowers-recognition")
# Find where the flowers folder is
raw_source = os.path.join(path, 'flowers') if os.path.exists(
os.path.join(path, 'flowers')
) else path
# Create folders for our organized data
base_dir = '/content/flower_data'
classes = ['daisy', 'dandelion', 'rose', 'sunflower', 'tulip']
# Make train and validation folders for each flower type
for split in ['train', 'val']:
for flower in classes:
os.makedirs(os.path.join(base_dir, split, flower), exist_ok=True)
💡 What's happening? We're creating a folder structure like this:
flower_data/
├── train/ (80% of images - for teaching the AI)
│ ├── daisy/
│ ├── dandelion/
│ └── ...
└── val/ (20% of images - for testing the AI)
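If you want to convince yourself that the nested loop really creates all ten class folders, you can point the same logic at a throwaway temp directory (the tempfile path here is just for illustration, not the /content path used in the tutorial):

```python
import os
import tempfile

# Build the same train/val layout in a disposable directory
base_dir = tempfile.mkdtemp()
classes = ['daisy', 'dandelion', 'rose', 'sunflower', 'tulip']

for split in ['train', 'val']:
    for flower in classes:
        os.makedirs(os.path.join(base_dir, split, flower), exist_ok=True)

# 2 splits x 5 classes = 10 class folders in total
made = [os.path.join(s, f)
        for s in ['train', 'val'] for f in classes
        if os.path.isdir(os.path.join(base_dir, s, f))]
print(len(made))  # 10
```

This is exactly the layout YOLO's classification trainer expects: one subfolder per class under `train/` and `val/`.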
# Split images: 80% for training, 20% for validation
for flower in classes:
flower_dir = os.path.join(raw_source, flower)
if not os.path.isdir(flower_dir):
continue
# Get all image files
imgs = [f for f in os.listdir(flower_dir)
if f.lower().endswith(('.jpg', '.jpeg', '.png'))]
# Split them randomly (80/20)
train_imgs, val_imgs = train_test_split(
imgs, test_size=0.2, random_state=42
)
# Copy files to the right folders
for img in train_imgs:
shutil.copy(
os.path.join(flower_dir, img),
os.path.join(base_dir, 'train', flower, img)
)
for img in val_imgs:
shutil.copy(
os.path.join(flower_dir, img),
os.path.join(base_dir, 'val', flower, img)
)
🤔 Why split the data? We train the AI on 80% of images, then test it on the other 20% (images it's never seen). This tells us if the AI truly "learned" or just memorized. It's like studying with practice problems, then taking a test with new questions.
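Under the hood, train_test_split is just a deterministic shuffle plus a cut. Here's a stdlib-only sketch of the same idea, using toy filenames rather than the real dataset:

```python
import random

# Toy stand-in for one flower folder's filenames
imgs = [f"img_{i}.jpg" for i in range(100)]

# Same idea as train_test_split(imgs, test_size=0.2, random_state=42):
# shuffle deterministically, then cut the list at the 80% mark
rng = random.Random(42)
shuffled = imgs[:]
rng.shuffle(shuffled)

cut = int(len(shuffled) * 0.8)
train_imgs, val_imgs = shuffled[:cut], shuffled[cut:]

print(len(train_imgs), len(val_imgs))  # 80 20
```

Note that every image lands in exactly one of the two lists, so the model never gets to "peek" at its test questions during training.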
Step 3: Train the AI Model
This is where the magic happens! We use YOLOv8, a pre-trained model that already knows how to recognize objects.
print("--- Step 2: Training Model (this may take a few minutes) ---")
# Load a pre-trained YOLOv8 Nano model
model = YOLO('yolov8n-cls.pt')
# Train it on our flower data
model.train(
data=base_dir, # Where our organized images are
epochs=10, # How many times to look at all images
imgsz=224, # Resize images to 224x224 pixels
batch=32, # Process 32 images at a time
name='flower_model' # Save results with this name
)
💡 What's YOLO? "You Only Look Once" is a computer vision model. We're using the Nano classification variant (only 1.4M parameters!), which is fast and lightweight, perfect for beginners.
🔑 Key parameters explained:
- epochs=10 - The AI will see all 4,000 images 10 times
- imgsz=224 - Standardize all images to 224×224 pixels
- batch=32 - Process 32 images simultaneously for speed
Training takes ~5-10 minutes on a free Google Colab GPU. After training, you'll see metrics showing 98.6% accuracy!
Step 4: Build the Web Interface
Now let's make it interactive! Gradio creates a beautiful web interface with just a few lines of code.
print("--- Step 3: Launching Interactive Interface ---")
# Define the prediction function
def predict_flower(img):
"""Takes an image and returns flower predictions"""
results = model.predict(source=img, conf=0.25)
probs = results[0].probs.data.tolist() # Get probabilities
names = results[0].names # Get class names
# Return as a dictionary: {"daisy": 0.95, "rose": 0.03, ...}
return {names[i]: probs[i] for i in range(len(names))}
# Create the Gradio interface
demo = gr.Interface(
fn=predict_flower, # Function to call
inputs=gr.Image(type="pil", label="Upload Flower"),
outputs=gr.Label(num_top_classes=3, label="Top Predictions"),
title="🌸 Flower Recognition AI",
description="Trained on Daisy, Dandelion, Rose, Sunflower, and Tulip."
)
# Launch it with a public URL anyone can access
demo.launch(share=True)
💡 How it works: When someone uploads an image, Gradio calls predict_flower(), which runs the image through our trained model. The model outputs probabilities for each flower type (e.g., "85% sure it's a rose, 10% tulip, 5% other"). Gradio displays the top 3 predictions!
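predict_flower returns a plain {class_name: probability} dictionary, and gr.Label with num_top_classes=3 simply shows the three highest entries. You can see the selection logic without the model; the probabilities below are made up for illustration, not real model output:

```python
# Fake output shaped like what predict_flower returns
probs = {
    "daisy": 0.03,
    "dandelion": 0.01,
    "rose": 0.85,
    "sunflower": 0.01,
    "tulip": 0.10,
}

# What gr.Label(num_top_classes=3) effectively displays:
# the three entries with the highest probability
top3 = dict(sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:3])
print(top3)  # {'rose': 0.85, 'tulip': 0.1, 'daisy': 0.03}
```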
🚀 Deployment tip: The share=True parameter creates a temporary public link. For permanent hosting, deploy to Hugging Face Spaces (free!) like I did with the demo above.
📊 Results & Performance
- Out of 800 test images, the model correctly identified 789 flowers (98.6% accuracy)
- Processes images in milliseconds thanks to YOLOv8 Nano's efficiency
The model performs best on clear, centered images with good lighting. It occasionally confuses white daisies with white roses, but overall accuracy is excellent for such a lightweight model!
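The headline accuracy follows directly from those counts:

```python
# Accuracy is just correct predictions divided by total test images
correct, total = 789, 800
accuracy = correct / total
print(f"{accuracy:.1%}")  # 98.6%
```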
🎯 Next Steps & Challenges
Congratulations! You've just built your first AI model. Here are some ways to level up:
🌟 Beginner Challenges
- Add more flower types to the dataset
- Experiment with different epochs values
- Try YOLOv8s (small) instead of Nano for better accuracy
- Customize the Gradio interface colors and layout
🔥 Advanced Challenges
- Implement data augmentation (rotation, flipping)
- Add confidence thresholds and error handling
- Deploy to a custom domain with Docker
- Build a mobile app using TensorFlow Lite
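The data augmentation challenge boils down to generating flipped and rotated copies of each training image. Here's the core transform logic on a toy nested-list "image" (real code would use a library like Pillow, or Ultralytics' built-in augmentation hyperparameters such as fliplr and degrees):

```python
# Data augmentation in miniature: a horizontal flip and a 90° rotation on a
# tiny 2x3 "image" (nested lists standing in for pixel arrays)
img = [
    [1, 2, 3],
    [4, 5, 6],
]

# Horizontal flip: reverse each row
flipped = [row[::-1] for row in img]

# Rotate 90° clockwise: transpose, then reverse each new row
rotated = [list(col)[::-1] for col in zip(*img)]

print(flipped)  # [[3, 2, 1], [6, 5, 4]]
print(rotated)  # [[4, 1], [5, 2], [6, 3]]
```

Each augmented copy counts as a "new" training example, which helps the model generalize beyond the exact photos in the dataset.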
Final Thoughts
You've just gone from zero to a deployed AI model in under 100 lines of code. That's the beauty of modern machine learning—powerful tools are now accessible to everyone, not just researchers.
The same principles you learned here apply to recognizing faces, detecting diseases in X-rays, or identifying defects in manufacturing. The possibilities are endless once you understand the fundamentals.
💬 Questions or built something cool? I'd love to hear about it! Check out the GitHub repo or try the Colab notebook linked above.
Try It Live!
Upload any flower image above to see the AI in action. Works best with daisies, dandelions, roses, sunflowers, and tulips!