MODULE 3 QUIZ ⏱️ 30-40 minutes 📝 20 questions 🎯 Pass: 70%

Autonomous AI Agents & Multi-Agent Systems

Test your mastery of ReAct patterns, LangGraph workflows, multi-agent coordination, memory systems, and production deployment.

Q1
What is the primary difference between a chatbot and an autonomous AI agent?
Conceptual
✓ Correct Answer: B

The key distinction is autonomy and tool use. Agents can reason about what actions to take, invoke external tools (search, APIs, calculators), and iteratively work toward goals. Chatbots primarily generate conversational text without external interactions.

Q2
In the ReAct pattern, what does "ReAct" stand for?
Definition
✓ Correct Answer: B

ReAct = Reasoning + Acting. The pattern alternates between the agent thinking about what to do (reasoning) and actually doing it (acting with tools). This creates a loop: Thought → Action → Observation → Thought → ...

Q3
What is the correct order of steps in a ReAct agent's execution loop?
Process
✓ Correct Answer: B

The ReAct loop follows: Thought (agent reasons) → Action (agent uses tool) → Observation (agent sees tool output) → repeat until the agent has enough information to provide a Final Answer.
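
The loop above can be sketched in plain Python. The `reason` function is a stand-in for the LLM call, and the tool name and query are illustrative:

```python
# Minimal sketch of the ReAct loop with a stubbed reasoning step and tool.

def calculator(expression: str) -> str:
    """Evaluate a simple arithmetic expression."""
    return str(eval(expression))  # demo only; never eval untrusted input

TOOLS = {"calculator": calculator}

def reason(question: str, observations: list[str]) -> tuple[str, str]:
    """Stand-in for the LLM: decide the next action, or finish."""
    if not observations:
        return "Action", "calculator:17 * 3"
    return "Final Answer", observations[-1]

def react_loop(question: str, max_iterations: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_iterations):
        step, payload = reason(question, observations)      # Thought
        if step == "Final Answer":
            return payload
        tool_name, tool_input = payload.split(":", 1)       # Action
        observations.append(TOOLS[tool_name](tool_input))   # Observation
    return "Stopped: iteration limit reached"

print(react_loop("What is 17 * 3?"))  # → 51
```

Note the iteration cap: real agent executors enforce one too, so a confused model can't loop forever.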

Q4
What gives an AI agent "autonomy"?
Conceptual
✓ Correct Answer: A

Autonomy comes from the agent's ability to make decisions about which actions to take, which tools to invoke, and when to stop. The LLM acts as the "reasoning engine" that determines the next step based on observations.

Q5
Which framework is best suited for building cyclic, stateful workflows with conditional branching?
Framework Selection
✓ Correct Answer: B

LangGraph is specifically designed for building complex, cyclic workflows with nodes, edges, and conditional logic. It excels at scenarios where you need loops, quality checks, and state management across multiple steps.
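
The cyclic draft-and-review pattern LangGraph formalizes can be sketched in plain Python; the node and check names here are illustrative, not LangGraph API:

```python
# Plain-Python sketch of a cyclic draft → quality-check workflow — the
# pattern LangGraph expresses with nodes, edges, and conditional routing.

def draft_node(state: dict) -> dict:
    state["draft"] = (state["draft"] + " improved") if state["draft"] else "v1"
    state["revisions"] += 1
    return state

def quality_check(state: dict) -> str:
    # Conditional edge: loop back until the draft passes or we give up.
    if "improved improved" in state["draft"] or state["revisions"] >= 3:
        return "done"
    return "revise"

state = {"draft": "", "revisions": 0}
while True:                       # the cycle a DAG-only framework can't express
    state = draft_node(state)
    if quality_check(state) == "done":
        break
print(state["revisions"])  # → 3
```

In LangGraph the `while` loop becomes a conditional edge back to the draft node, and `state` becomes a typed shared state object.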

Q6
What is CrewAI's primary design philosophy?
Framework Concepts
✓ Correct Answer: B

CrewAI focuses on role-based collaboration. You define agents with specific roles, goals, and backstories, then assign them tasks. The framework handles orchestration using sequential or hierarchical processes.

Q7
What distinguishes AutoGPT from LangChain-based agents?
Framework Comparison
✓ Correct Answer: A

AutoGPT is designed for fully autonomous operation with long-term goals, persistent memory, and the ability to spawn sub-agents. LangChain agents are typically more focused and require tighter orchestration.

Q8
When selecting tools for an agent, what is the most important consideration?
Best Practice
✓ Correct Answer: B

The LLM selects tools based on their descriptions. Clear, concise, and unambiguous descriptions are critical. Poor descriptions lead to incorrect tool selection and agent failures.
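
A toy registry makes the point concrete. The keyword-overlap scorer below is a crude stand-in for the LLM's selection process, and the tool names are illustrative:

```python
# Toy illustration: the "LLM" picks a tool purely from its description,
# so vague descriptions cause wrong selections.

TOOLS = {
    "web_search": "Search the web for current events and recent facts.",
    "calculator": "Evaluate arithmetic expressions like '2 + 2 * 10'.",
    "weather":    "Get the current weather for a named city.",
}

def select_tool(query: str) -> str:
    """Crude stand-in for LLM tool selection: keyword-overlap scoring."""
    def score(description: str) -> int:
        words = set(description.lower().replace(".", "").split())
        return len(words & set(query.lower().split()))
    return max(TOOLS, key=lambda name: score(TOOLS[name]))

print(select_tool("what is the weather in a city"))  # → weather
```

If two descriptions overlapped heavily ("search stuff" vs. "find stuff"), selection would become a coin flip — which is exactly what happens to real agents given ambiguous tool descriptions.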

Q9
In a multi-agent system, what is the most common pattern for agent communication?
Architecture
✓ Correct Answer: B

Agents typically communicate through shared state (LangGraph) or message passing coordinated by an orchestrator (CrewAI). Each agent's output becomes input for the next agent or is added to shared context.

Q10
Which coordination strategy is best for a research project requiring multiple specialized roles?
System Design
✓ Correct Answer: A (or C)

Sequential coordination works best when tasks have clear dependencies (research must complete before writing). Hierarchical is also valid if complexity requires a manager agent to coordinate. Both answers would be acceptable in practice.

Q11
What is the primary benefit of agent specialization in multi-agent systems?
Benefits
✓ Correct Answer: B

Specialization allows each agent to excel at its specific role. A research agent has search tools and is prompted for thoroughness. A writer agent focuses on clarity and structure. This division of labor improves overall quality.

Q12
What is task decomposition in the context of multi-agent systems?
Definition
✓ Correct Answer: A

Task decomposition is the process of breaking down a large, complex goal (e.g., "write a research report") into manageable sub-tasks (research, outline, write, edit, fact-check) that can be tackled by specialized agents.

Q13
Which memory type persists across application restarts?
Technical - Code
# Option A
memory = ConversationBufferMemory()

# Option B
memory = ConversationSummaryMemory(llm=llm)

# Option C
memory = Chroma(
    embedding_function=embeddings,
    persist_directory="./memory"
)

# Option D
memory = {"history": []}
✓ Correct Answer: C

Chroma with persist_directory writes embeddings to disk, allowing memories to survive application restarts. Buffer and summary memory are in-memory only and lost on restart. Dictionaries are also ephemeral.

Q14
In LangGraph, what does the state schema define?
LangGraph Concepts
✓ Correct Answer: A

The state schema (typically a TypedDict) defines the structure of shared data that flows through the graph. Each node can read from and write to this state, enabling coordination and information sharing.
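
A typical schema looks like the sketch below; the field names are illustrative. The `Annotated[..., operator.add]` pattern is how LangGraph is told to append a node's messages to the list rather than overwrite it:

```python
from typing import Annotated, TypedDict
import operator

# Illustrative LangGraph-style state schema.
class AgentState(TypedDict):
    messages: Annotated[list[str], operator.add]  # appended, not replaced
    next_agent: str
    iterations: int

# Each node reads the shared state and returns a partial update.
def research_node(state: AgentState) -> dict:
    return {"messages": ["research: found 3 sources"],
            "next_agent": "writer",
            "iterations": state["iterations"] + 1}

state: AgentState = {"messages": [], "next_agent": "researcher", "iterations": 0}
update = research_node(state)
print(update["next_agent"])  # → writer
```

In a real graph, LangGraph merges each node's returned dict into the shared state according to the schema's annotations.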

Q15
Why is conversation summarization important for long-running agents?
Memory Management
✓ Correct Answer: B

Summarization compresses conversation history into key points, preventing the context window from filling up and drastically reducing token costs for long interactions. Without it, agents would exceed context limits or become prohibitively expensive.
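
The mechanism can be sketched with a token budget and a stubbed summarizer (`summarize` stands in for an LLM call, and `count_tokens` is a crude word-count proxy):

```python
# Sketch of rolling summarization: once history exceeds a token budget,
# the oldest turns are collapsed into a summary entry.

def count_tokens(text: str) -> int:
    return len(text.split())  # crude proxy for a real tokenizer

def summarize(turns: list[str]) -> str:
    return f"[summary of {len(turns)} earlier turns]"  # stub LLM call

def add_turn(history: list[str], turn: str, budget: int = 20) -> list[str]:
    history = history + [turn]
    while sum(count_tokens(t) for t in history) > budget and len(history) > 2:
        history = [summarize(history[:2])] + history[2:]
    return history

history: list[str] = []
for turn in ["user: tell me about agents " * 3, "ai: sure, agents are " * 3,
             "user: and memory?", "ai: memory lets agents persist context"]:
    history = add_turn(history, turn)
print(history[0])  # → [summary of 2 earlier turns]
```

This is the idea behind LangChain's summary-style memories: bounded context at the cost of an extra LLM call and some lost detail.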

Q16
What is the primary advantage of vector-based long-term memory?
Memory Systems
✓ Correct Answer: B

Vector memory uses semantic search via embeddings. You can query "agent frameworks" and retrieve memories about "LangGraph and CrewAI" even if those exact words weren't used. This is far more powerful than keyword matching.
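
Retrieval by embedding similarity can be shown with hand-made 3-d vectors; real systems use learned embedding models with hundreds of dimensions, but the cosine-similarity ranking is the same:

```python
import math

# Toy semantic search: memories are stored as vectors, and the query is
# matched by cosine similarity, not by shared keywords.

MEMORIES = {
    "LangGraph and CrewAI comparison": [0.9, 0.1, 0.0],
    "Paris travel itinerary":          [0.0, 0.2, 0.9],
    "meeting notes from Tuesday":      [0.6, 0.6, 0.1],
}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def search(query_vec: list[float]) -> str:
    return max(MEMORIES, key=lambda k: cosine(query_vec, MEMORIES[k]))

# A query like "agent frameworks" embeds near the framework memory even
# though those exact words never appear in it.
print(search([0.85, 0.2, 0.05]))  # → LangGraph and CrewAI comparison
```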

Q17
What does this FastAPI endpoint do?
Technical - Code
@app.post("/agent/stream")
async def query_agent_stream(request: AgentRequest):
    return StreamingResponse(
        stream_agent_response(request.query),
        media_type="text/event-stream"
    )
✓ Correct Answer: B

This endpoint uses Server-Sent Events (SSE) to stream the agent's output in real-time. The client receives updates as the agent thinks and acts, providing a responsive user experience.
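
The `stream_agent_response` generator behind that endpoint might look like the sketch below; the hard-coded chunks stand in for real agent events (FastAPI's `StreamingResponse` accepts sync or async generators):

```python
# Sketch of the generator the endpoint streams. Each yielded string is one
# SSE frame: a "data:" line terminated by a blank line.

def stream_agent_response(query: str):
    chunks = ["Thought: searching...", "Action: web_search", "Final answer ready"]
    for chunk in chunks:  # a real agent would yield events as they occur
        yield f"data: {chunk}\n\n"

frames = list(stream_agent_response("test"))
print(frames[0], end="")  # prints: data: Thought: searching...
```

The `data: ...\n\n` framing is what makes this a valid `text/event-stream`; browsers can consume it directly with `EventSource`.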

Q18
What is the best practice for handling agent tool failures?
Production Best Practice
✓ Correct Answer: B

The agent should see tool errors so it can reason about alternatives: if a search fails, it might retry with a different query or switch tools. In LangChain, set handle_tool_error=True on a tool so tool exceptions are returned to the agent as observations; handle_parsing_errors=True on the AgentExecutor covers the related case of malformed LLM output.
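
The underlying pattern is simple regardless of framework: catch the exception and hand it back as an observation. The tool and wrapper names below are illustrative:

```python
# Sketch: wrap tool calls so failures become observations the agent can
# reason about, instead of exceptions that crash the loop.

def flaky_search(query: str) -> str:
    raise TimeoutError("search backend timed out")

def safe_tool_call(tool, tool_input: str) -> str:
    try:
        return tool(tool_input)
    except Exception as exc:  # surface the error as an observation
        return f"Tool error: {type(exc).__name__}: {exc}. Try another approach."

observation = safe_tool_call(flaky_search, "agent frameworks")
print(observation)
```

Seeing "Tool error: TimeoutError: ..." in its scratchpad, a ReAct agent can decide to reformulate the query or pick a different tool on the next iteration.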

Q19
Which technique is most effective for reducing agent API costs?
Cost Optimization
# Option A
llm = ChatOpenAI(model="gpt-4", temperature=0)

# Option B
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# Option C
agent.max_iterations = 100

# Option D
memory = ConversationBufferMemory()
✓ Correct Answer: B

Using a cheaper model where it suffices is the biggest lever: routing simple tasks to GPT-3.5-turbo and reserving GPT-4 for complex ones can cut costs by roughly 10x. Other techniques: cap max_iterations, cache responses, summarize conversation history, and compress prompts.
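
A model router can be as simple as a heuristic gate in front of the LLM call; the threshold and keyword list below are illustrative assumptions, not a tested policy:

```python
# Sketch of a model router: cheap model by default, expensive model only
# for queries that look complex.

COMPLEX_HINTS = ("analyze", "multi-step", "plan", "compare", "architecture")

def route_model(query: str) -> str:
    looks_complex = len(query.split()) > 40 or any(
        hint in query.lower() for hint in COMPLEX_HINTS
    )
    return "gpt-4" if looks_complex else "gpt-3.5-turbo"

print(route_model("What time is it in Tokyo?"))                      # → gpt-3.5-turbo
print(route_model("Analyze the trade-offs between architectures"))   # → gpt-4
```

Production routers often replace the heuristic with a small classifier model, or escalate to the expensive model only after the cheap one fails.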

Q20
What is the most important consideration when deploying agents to production?
Production Deployment
✓ Correct Answer: B

Production agents need robust error handling: timeouts prevent infinite loops, rate limiting prevents abuse, monitoring tracks performance and costs, and comprehensive logging aids debugging. Never deploy agents without these safeguards.
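
One of those safeguards, a hard timeout, can be sketched with a worker thread; `run_agent` is a stand-in for the real agent executor, and the sleep simulates a runaway loop:

```python
# Sketch: hard timeout around an agent call, so a runaway loop can't
# hang the request handler.

from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout
import time

def run_agent(query: str) -> str:
    time.sleep(1.0)  # simulate an agent stuck in a long loop
    return "answer"

def run_with_timeout(query: str, seconds: float = 0.5) -> str:
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(run_agent, query).result(timeout=seconds)
    except FutureTimeout:
        return "Error: agent timed out; retry or simplify the query."
    finally:
        pool.shutdown(wait=False)  # don't block the caller on the stuck worker

print(run_with_timeout("long task"))  # → Error: agent timed out; ...
```

Note that the worker thread itself keeps running to completion; for LLM-backed agents you would also cancel the underlying API call, and pair the timeout with rate limiting, logging, and cost monitoring.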
