About

Skills
  • Programming Languages
    • Python, SQL, Bash, R, C, YAML, Jinja2, JavaScript/TypeScript, HTML, CSS, Visual Basic, MATLAB
  • Data Analysis Tools
    • NumPy, Pandas, Matplotlib, Seaborn, Plotly, Tableau, ArcGIS, Excel
  • Machine Learning Frameworks
    • Scikit-learn, PyTorch, TensorFlow, Keras, LangChain, OpenAI API (LLMs & embeddings), Vector Databases (Chroma), Prompt Engineering, Retrieval-Augmented Generation (RAG)
  • Data Engineering Tools
    • PySpark, Databricks, Delta Lake, dbt, Kafka, Snowflake, PostgreSQL, MongoDB, GraphQL, Hadoop, Apache Airflow, FastAPI microservices, Data Modeling (Bronze → Silver → Gold), Semantic Layer Design
  • Cloud, DevOps, & Platform Tools
    • GCP (GCS, Dataproc, IAM), AWS (S3, EMR), Azure, Docker, Terraform (IaC), Git, GitHub Actions (CI/CD), REST APIs, React/TypeScript (Vite), Full-stack Integration (API + Vector DB + LLM + UI), Salesforce, SharePoint, LaTeX, Apricot
  • Soft Skills
    • Problem solving, attention to detail, teamwork, business/scientific/technical communication, ethical critical thinking, adaptability, leadership, conflict resolution

I’m Dylan Picart, a data scientist and engineer focused on building scalable, ethical, and human-centered AI systems for complex socioeconomic, educational, and environmental challenges. I blend rigorous technical engineering with a strong foundation in equity, ethics, and community impact.

My journey began as a Data Science Innovation Fellow at The Knowledge House, where I completed 400+ hours of intensive, project-based training in Python, SQL, data engineering, and machine learning. Combining that training with my B.S. in Physics and minor in Philosophy from Adelphi University, I developed an analytical approach grounded in scientific thinking, ethics, and systems design.

Professionally, I’ve worked across data science, analytics engineering, and full-stack data platform development, contributing to multidisciplinary teams at mission-driven organizations. My experience spans:

  • Designing hybrid batch and streaming pipelines using Databricks, Kafka/Spark, Dataproc, GCS, and dbt
  • Building AI-powered microservices using LangChain, FastAPI, Chroma, and OpenAI models
  • Developing natural-language RAG systems that generate grounded insights from Snowflake Gold data
  • Deploying machine learning models for NLP, sentiment analysis, and image caption evaluation
  • Automating ETL workflows with Python, Bash, JavaScript, and Airflow
  • Managing complex dataset migrations and improving data quality, accessibility, and trust

At the core of my work is a commitment to ethical AI, transparent modeling, and leveraging technology to empower communities rather than merely automate systems. I’m especially passionate about education equity, climate justice, and how data can illuminate structural barriers.

Outside of engineering, you’ll find me running long distances, scuba diving, surfing, swimming, or practicing mindfulness. I’m always exploring unconventional learning pathways and curious, interdisciplinary conversations.

If you’d like to collaborate, brainstorm, or connect over data, AI, or community-focused innovation, feel free to reach out — I’d love to chat.