Welcome to JustinBot
JustinBot is your personal guide to everything I do. Whether you’re a recruiter or just curious, JustinBot will answer questions about my projects, my research, and anything else about me!
Simply type a question and watch JustinBot craft a clear, conversational response.
Start the Conversation

Example questions:
• Where did Justin do his summer internships?
• What does Justin study at school?
• Explain Justin's work experiences
How JustinBot Works
I wanted a fun, interactive way to connect with people, so I trained my own AI to answer questions about myself: JustinBot. It works by securely calling the OpenAI API with my private key, all hosted on Render’s on-demand server platform.

I gradually fed it a diverse set of materials: scholarship applications, up-to-date resumes, project write-ups, miscellaneous documents, and even a giant FAQ I assembled, until it knew all about me. Through iterative prompt engineering and fine-tuning, I polished JustinBot until it could answer questions just as I would.

To keep JustinBot current, every time I land a new job, finish a project, or rack up another accomplishment, I update the server code and refresh the training data, so JustinBot always knows the very latest about me. Below are the steps that power JustinBot:
-
You Ask a Question (Client Side)
What happens: When you type into the chat box and hit “Submit,” your question is encrypted in your browser and sent over the internet.
Analogy: Imagine placing your question in a sealed envelope and dropping it in a secure postbox.
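A minimal sketch of what the chat box does when you hit “Submit” (the `/ask` endpoint name and response shape here are assumptions, not the exact production code):

```javascript
// Send the question to the server; HTTPS is the "sealed envelope" that
// encrypts it in transit.
async function askJustinBot(question) {
  const response = await fetch("/ask", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question }),
  });
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  const data = await response.json();
  return data.answer;
}
```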
-
Secure Gatekeeper (Authentication & Encryption)
What happens: Before leaving your browser, your question is wrapped in an extra layer of security (OAuth tokens & HTTPS).
Why it matters: Ensures only JustinBot (and no eavesdroppers) can read it.
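One way the extra security layer can look in code, a sketch only: a bearer token attached to the request headers so the server can verify the caller (the helper name and header format are illustrative assumptions):

```javascript
// Wrap outgoing headers with an auth token so only the real site
// can talk to the server. HTTPS handles the encryption underneath.
function withAuth(headers, token) {
  return { ...headers, Authorization: `Bearer ${token}` };
}
```

It would be used when building the fetch request, e.g. `headers: withAuth({ "Content-Type": "application/json" }, sessionToken)`.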
-
Node.js Middleman (The “Post Office”)
What happens: A lightweight Node.js server picks up your sealed envelope, checks the packaging, and prepares it for the AI.
Key jobs:
- Validation: Confirms the request is truly coming from your site.
- Error handling: Returns a friendly error message if the request is malformed.
Analogy: Think of this as the sorting facility that routes mail and makes sure it’s stamped correctly.
-
OpenAI Assistant API (The “Brain”)
What happens: Your question arrives at a giant neural network trained on books, articles, and code.
Customization: I’ve supplied a “system prompt” containing my résumé, project notes, and writing style, plus extra prompt-engineering rules (e.g. “Cite sources,” “Keep replies under 100 words”).
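For brevity, here is a sketch of the server-side call using the Chat Completions endpoint (the Assistants API follows a similar pattern); the model name and system-prompt text are illustrative, not the exact production values:

```javascript
// Ask OpenAI on the server, where the private key lives. The key is
// never exposed to the browser.
async function forwardToOpenAI(question) {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // illustrative model choice
      messages: [
        {
          role: "system",
          // The real prompt includes résumé, project notes, and style rules.
          content:
            "You are JustinBot. Answer as Justin would. " +
            "Cite sources and keep replies under 100 words.",
        },
        { role: "user", content: question },
      ],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}
```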
-
Answer Delivery (Back to You)
What happens: The AI crafts its reply, sends it back to my Node.js server, which unwraps the secure package and forwards the answer to your browser.
Privacy: HTTPS encryption in transit means no eavesdropper between your browser, my server, and OpenAI can read your question or the answer.
-
Render Hosting (Cold Starts & Scaling)
How it runs: I host on Render, which spins up containers on demand.
Cold start: If idle, the first request takes ~30 s (“JustinBot is sleeping... 💤”).
Warm instances: Subsequent replies return in under 1 s.
Why Render: Pay-as-you-go pricing, automatic SSL, GitHub integration for CI/CD.
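One way the client can surface that cold start, sketched under assumptions (the 5-second threshold and status element are illustrative, not the exact production code):

```javascript
// If the server hasn't replied within a few seconds, assume a cold start
// and show the "sleeping" notice; clear it once the answer arrives.
function askWithColdStartNotice(question, statusEl) {
  const sleepTimer = setTimeout(() => {
    statusEl.textContent = "JustinBot is sleeping... 💤 (waking up the server)";
  }, 5000); // warm replies return well under this
  return fetch("/ask", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ question }),
  })
    .then((res) => res.json())
    .finally(() => {
      clearTimeout(sleepTimer);
      statusEl.textContent = "";
    });
}
```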
-
Frontend Chat Interface (What You See)
Tech stack: Pure HTML, CSS, and vanilla JavaScript.
UX tricks:
- AJAX updates – new messages appear instantly without reloading.
- Responsive – adapts to phones and tablets.
- Accessible – ARIA labels and full keyboard navigation.
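The no-reload message flow above can be sketched in a few lines of vanilla JavaScript (class names and the `role` attribute choice are assumptions for illustration):

```javascript
// Append a new chat bubble to the log without reloading the page.
function appendMessage(chatLog, sender, text) {
  const bubble = document.createElement("div");
  bubble.className = `message ${sender}`; // styled per sender in CSS
  bubble.setAttribute("role", "listitem"); // keeps the log screen-reader friendly
  bubble.textContent = text; // textContent avoids HTML injection
  chatLog.appendChild(bubble);
  chatLog.scrollTop = chatLog.scrollHeight; // keep the latest message in view
  return bubble;
}
```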
Why This Matters
- Backend Engineering: Secure request handling & error recovery.
- Cloud DevOps: On-demand hosting & cost control.
- AI/ML Integration: Fine-tuned models & prompt engineering.
- UX & Accessibility: Snappy, inclusive chat UI.