First, run the development server:

```bash
npm run dev
# or
yarn dev
# or
pnpm dev
```
Open http://localhost:3000 with your browser to see the result.

You can start editing the page by modifying `app/page.js`. The page auto-updates as you edit the file.
This project uses `next/font` to automatically optimize and load Inter, a custom Google Font.
To learn more about Next.js, take a look at the following resources:
- Next.js Documentation - learn about Next.js features and API.
- Learn Next.js - an interactive Next.js tutorial.
You can check out the Next.js GitHub repository - your feedback and contributions are welcome!
The easiest way to deploy your Next.js app is to use the Vercel Platform from the creators of Next.js.
Check out our Next.js deployment documentation for more details.
- Introduction
- Project Overview
- Technology Stack
- Team and Responsibilities
- Development Timeline
- Computer Vision Repositories
This project focuses on document scanning and health data analysis using computer vision and machine learning. It combines these technologies to deliver document recognition, health data prediction, and community matching. This README provides an overview of the project and its components.
The project comprises both frontend and backend components:
- Frontend:
  - Sign-up / Sign-in Page
  - Home Page
  - Data Display Page
  - Community Page
  - Integration of JavaScript Mobile Document Scanner
  - Integration of user data storage
- Backend:
  - Processing text from images and associating key-value pairs
  - Django as the web framework
  - Setting up vector databases and matching
  - Creating a method of communication within a group/community
  - Data communication between frontend and backend
  - Neural network models for prediction
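The key-value association step above can be sketched in plain Python. This is a minimal illustration, assuming Tesseract has already produced raw text lines; the regex, field names, and `extract_key_values` helper are hypothetical, not the project's actual code:

```python
import re

def extract_key_values(ocr_lines):
    """Pair 'Label: value' lines from raw OCR output into a dict.

    ocr_lines: list of strings, as OCR might emit them. Lines that do
    not look like a labeled field are skipped rather than guessed at.
    """
    pairs = {}
    for line in ocr_lines:
        # Match a word-like label, a ':' or '-' separator, then the value.
        match = re.match(r"\s*([A-Za-z][\w /]*?)\s*[:\-]\s+(.+)", line)
        if match:
            key = match.group(1).strip().lower().replace(" ", "_")
            pairs[key] = match.group(2).strip()
    return pairs

# Hypothetical OCR output from a scanned health document:
sample = [
    "Patient Name: Jane Doe",
    "Blood Pressure: 120/80",
    "(illegible smudge)",
    "Heart Rate: 72 bpm",
]
print(extract_key_values(sample))
```

A real pipeline would feed the image through Tesseract first and likely need fuzzier matching to tolerate OCR noise; this sketch only shows the association step.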
The core process involves photographing paper documents, extracting key information with computer vision, processing user questionnaires through LLMs, and matching users to unique communities based on health data and other factors.
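The matching step at the end of this pipeline can be illustrated with a small similarity search. This is a toy in-memory sketch of the idea behind vector-database matching; the community names, embedding values, and `match_community` helper are invented for illustration (the real project would query PineconeDB instead):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def match_community(user_vector, communities):
    """Return the community whose embedding is most similar to the user's.

    communities: dict mapping community name -> embedding (list of floats).
    In production these vectors would live in a vector database; this
    in-memory version only demonstrates the nearest-neighbor idea.
    """
    return max(
        communities,
        key=lambda name: cosine_similarity(user_vector, communities[name]),
    )

# Toy 3-dimensional "health embeddings" (hypothetical values):
communities = {
    "runners":   [0.9, 0.1, 0.2],
    "diabetics": [0.1, 0.8, 0.3],
}
print(match_community([0.85, 0.2, 0.1], communities))  # prints "runners"
```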
The project utilizes a diverse technology stack:
- File Management:
  - GitHub
  - Git
  - Git repository
- Frontend:
  - Next.js
  - JavaScript
  - React
  - Tailwind CSS
- Backend:
  - Firebase
  - PineconeDB
- Computer Vision:
  - TensorFlow
  - Tesseract
- Presentation:
  - Canva
  - Figma
  - Google Slides
- John: Figma Mockup, Frontend Application of Mobile Site, Pitch Deck
- Siya: Predictive Models, Computer Vision, Frontend, Pitch Deck
- Ben: Computer Vision, Predictive Models, Vector Database, GitHub & Git Repository Setup
- Auston: Backend (API), Computer Vision, Messaging Integration
Here's an overview of the project development timeline:
- Integration and pitch deck: 6 hours
- Figma design: 2.5 hours
- Frontend development: 12 hours
- Computer vision MVP: 12 hours
- Predictive model MVP: 8 hours
- Community matching via vector database: 6 hours