In this tutorial, we walk through how to build a small but powerful multi-agent CrewAI system that collaborates using the Gemini Flash model. We configure our environment, authenticate securely, define specialized agents, and coordinate the tasks that flow from research to structured writing. As the crew runs, we observe how each component works together in real time, giving us a practical understanding of modern agent workflows powered by LLMs. Through these steps, we see how multi-agent pipelines become practical, modular, and developer-friendly.
import os
import sys
import getpass
from textwrap import dedent
print("Installing CrewAI and tools... (this may take 1-2 mins)")
!pip install -q crewai crewai-tools
from crewai import Agent, Task, Crew, Process, LLM
We set up our environment and install the required CrewAI packages so we can run everything smoothly in Colab. We import the necessary modules and lay the foundation for our multi-agent workflow. This step ensures that our runtime is clean and ready for the agents we create next.
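Before moving on, it can help to confirm that the packages are actually importable, so a missing install fails fast with a clear message instead of a mid-run ImportError. This is a minimal sketch using only the standard library; `is_installed` is a hypothetical helper name, not part of CrewAI.

```python
import importlib.util

def is_installed(package: str) -> bool:
    # Returns True if the package can be imported in this runtime.
    return importlib.util.find_spec(package) is not None

for pkg in ("crewai", "crewai_tools"):
    status = "✅" if is_installed(pkg) else "❌ missing - rerun the pip install cell"
    print(f"{pkg}: {status}")
```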
print("n--- API Authentication ---")
api_key = None
try:
    from google.colab import userdata
    api_key = userdata.get('GEMINI_API_KEY')
    print("✅ Found GEMINI_API_KEY in Colab Secrets.")
except Exception:
    pass

if not api_key:
    print("ℹ️ Key not found in Secrets.")
    api_key = getpass.getpass("🔑 Enter your Google Gemini API Key: ")

if not api_key:
    sys.exit("❌ Error: No API Key provided. Please restart and enter a key.")

os.environ["GEMINI_API_KEY"] = api_key
We authenticate securely by retrieving the Gemini API key from Colab Secrets or prompting for it interactively. We then store the key in the environment so that the model can work without interruption. This step gives us confidence that our agent framework can reliably communicate with the LLM.
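The same env-first, prompt-fallback pattern can be factored into a reusable helper for other notebooks. This is a sketch; `resolve_api_key` is a hypothetical name, and it assumes the key may already be in `os.environ` (for example, exported by an earlier cell or by Colab Secrets).

```python
import os
import getpass

def resolve_api_key(var_name: str = "GEMINI_API_KEY", prompt: bool = True) -> str:
    # Prefer a key that is already in the environment.
    key = os.environ.get(var_name)
    if not key and prompt:
        key = getpass.getpass(f"🔑 Enter your {var_name}: ")
    if not key:
        raise RuntimeError(f"No {var_name} provided.")
    os.environ[var_name] = key  # make it visible to downstream libraries
    return key
```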
gemini_flash = LLM(
    model="gemini/gemini-2.0-flash",
    temperature=0.7
)
We create the Gemini Flash model on which our agents think and create. We choose the model and temperature parameters to balance creativity and accuracy. This configuration becomes the shared intelligence that drives all of the agents' tasks forward.
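If we later want to experiment with different models or temperatures, it can be convenient to centralize these settings in one place. `make_llm_kwargs` below is a hypothetical helper, and the model string follows the same `provider/model` convention used above.

```python
def make_llm_kwargs(model: str = "gemini/gemini-2.0-flash",
                    temperature: float = 0.7) -> dict:
    # Lower temperature -> more deterministic output; higher -> more creative.
    if not 0.0 <= temperature <= 2.0:
        raise ValueError("temperature should be between 0.0 and 2.0")
    return {"model": model, "temperature": temperature}

# Usage (assuming crewai's LLM class is imported as above):
# gemini_flash = LLM(**make_llm_kwargs(temperature=0.3))
```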
researcher = Agent(
    role="Tech Researcher",
    goal="Uncover cutting-edge developments in AI Agents",
    backstory=dedent("""You are a veteran tech analyst with a knack for finding emerging trends before they become mainstream. You specialize in Autonomous AI Agents and Large Language Models."""),
    verbose=True,
    allow_delegation=False,
    llm=gemini_flash
)

writer = Agent(
    role="Technical Writer",
    goal="Write a concise, engaging blog post about the researcher's findings",
    backstory=dedent("""You transform complex technical concepts into compelling narratives. You write for a developer audience who wants practical insights without fluff."""),
    verbose=True,
    allow_delegation=False,
    llm=gemini_flash
)
We define two specialized agents, a researcher and a writer, each with a clear role and backstory. We design them to complement each other: one discovers ideas while the other turns them into polished writing. Here, we begin to see how cooperation between agents takes shape.
research_task = Task(
    description=dedent("""Conduct a simulated research analysis on 'The Future of Agentic AI in 2025'. Identify three key trends: 1. Multi-Agent Orchestration 2. Neuro-symbolic AI 3. On-device Agent execution Provide a summary for each based on your 'expert knowledge'."""),
    expected_output="A structured list of 3 key AI trends with brief descriptions.",
    agent=researcher
)

write_task = Task(
    description=dedent("""Using the researcher's findings, write a short blog post (approx 200 words). The post should have: - A catchy title - An intro - The three bullet points - A conclusion on why developers should care."""),
    expected_output="A markdown-formatted blog post.",
    agent=writer,
    context=[research_task]
)
We create two tasks that define specific responsibilities for our agents. We let the researcher generate structured insights, then pass that output to the writer to create a full blog post. This step demonstrates how we cleanly orchestrate sequential task dependencies within CrewAI.
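Conceptually, `Process.sequential` with `context` behaves like a plain function pipeline, where each step receives the previous step's output. The toy `run_sequential` below is a simplified stand-in to illustrate that flow, not CrewAI's actual implementation.

```python
def run_sequential(tasks, initial_input=""):
    # Each task is a function; the output of one becomes the context of the next.
    context = initial_input
    for task in tasks:
        context = task(context)
    return context

def toy_research(_):
    return "Trends: multi-agent orchestration, neuro-symbolic AI, on-device agents"

def toy_write(context):
    return f"# The Future of Agentic AI\n\nBased on research: {context}"

post = run_sequential([toy_research, toy_write])
print(post)
```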
tech_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
    verbose=True
)
print("n--- 🤖 Starting the Crew ---")
result = tech_crew.kickoff()
from IPython.display import Markdown

print("\n\n########################")
print("## FINAL OUTPUT ##")
print("########################\n")

display(Markdown(str(result)))
We group the agents and tasks into a crew and run the entire multi-agent workflow. We watch the system execute step by step, producing the final markdown output. This is where everything comes together, and we see our agents collaborating in real time.
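Once the crew finishes, we often want to keep the result alongside the notebook; since `str(result)` is already markdown, saving it is straightforward. The snippet below uses a placeholder string in place of a real `result`, and `blog_post.md` is an arbitrary filename.

```python
from pathlib import Path

# In the real run, this would be: final_markdown = str(result)
final_markdown = "# Sample Post\n\nPlaceholder for str(result) from tech_crew.kickoff()."

out_file = Path("blog_post.md")
out_file.write_text(final_markdown, encoding="utf-8")
print(f"Saved {out_file} ({out_file.stat().st_size} bytes)")
```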
In conclusion, we appreciate how seamlessly CrewAI allows us to create coordinated agent systems that think, research, and write together. We experience first-hand how defining roles, tasks, and process flows lets us break complex work into modules and achieve cohesive deliverables with minimal code. This framework enables us to build richer, more autonomous agent applications, and we are confident in extending this foundation to larger multi-agent systems, production pipelines, or more creative AI collaborations.
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of AI for social good. His most recent endeavor is the launch of the AI media platform, Marktechpost, which features in-depth coverage of machine learning and deep learning news that is technically sound and easy to understand by a broad audience. The platform has more than 2 million views per month, which shows its popularity among the masses.







