
Announcing the Bio x AI Hackathon Winners!
The next leap in scientific discovery won’t come from a single genius, but from a new kind of system: one built by a global community of researchers, engineers and scientific AI agents.
During Bio's 2-month-long "Bio x AI" Virtual Hackathon, over 60 teams built scientific agents and agentic science frameworks.
These projects went beyond code. They delivered visions of what science can be when it’s open, decentralized, and powered by intelligence that never sleeps:
• Winner Announcement thread on X
• Bio x AI Project Submissions
• Video livestream showcasing winning projects
Main Prizes ($40,000 Total)
$10,000 – SideEffectNet
GitHub and Project Submission
SideEffectNet is an intelligent scientific assistant designed to analyze drug safety and visualize drug-side effect relationships. It provides tools for risk analysis, hypothesis generation, and data visualization, making it a powerful platform for pharmacological research and decision-making.
Why It Matters
As pharmacological data grows more complex, it’s becoming harder for healthcare professionals and researchers to make sense of drug-side effect relationships, evaluate potential risks, and pinpoint safer alternatives. This project offers a clear, intuitive platform for visualizing and analyzing drug safety data, empowering users to make smarter, evidence-based decisions in both clinical practice and scientific research.
Key Features
- Drug Risk Analysis: Analyze risk scores and side effects for individual drugs.
- Drug Interaction Analysis: Identify shared side effects between drug combinations.
- Risk Visualization: Generate bar charts and network graphs to visualize drug risk scores and relationships.
- Polypharmacy Risk Detection: Detect potential risks when combining multiple medications.
- Hypothesis Generation: Generate scientifically validated hypotheses for drug combinations.
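The kind of analysis these bullets describe can be sketched in a few lines. Everything below is illustrative: the drug names, side-effect frequencies, and the risk formula are invented for demonstration, not taken from SideEffectNet's actual data or scoring model.

```python
# Illustrative sketch of drug-combination analysis. All data and the
# risk formula are made up; SideEffectNet draws on real pharmacological
# databases and a more sophisticated scoring model.

SIDE_EFFECTS = {
    "drug_a": {"nausea": 0.30, "headache": 0.10, "dizziness": 0.05},
    "drug_b": {"nausea": 0.20, "fatigue": 0.15},
    "drug_c": {"dizziness": 0.25, "fatigue": 0.05},
}

def risk_score(drug):
    """Sum of side-effect frequencies for a single drug."""
    return round(sum(SIDE_EFFECTS[drug].values()), 2)

def shared_side_effects(*drugs):
    """Side effects reported for every drug in a combination."""
    common = set.intersection(*(set(SIDE_EFFECTS[d]) for d in drugs))
    return sorted(common)

def polypharmacy_risk(*drugs):
    """Naive combination risk: individual risks plus a penalty per overlap."""
    base = sum(risk_score(d) for d in drugs)
    penalty = 0.1 * len(shared_side_effects(*drugs))
    return round(base + penalty, 2)

print(shared_side_effects("drug_a", "drug_b"))  # ['nausea']
print(polypharmacy_risk("drug_a", "drug_b"))
```

The same per-drug scores could feed the bar charts and network graphs the project generates, with drugs as nodes and shared side effects as edges.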
$10,000 – QuantumParse
GitHub and Project Submission
QuantumParse is a working Eliza AI agent enhanced with a custom Gaussian Knowledge Graph plugin for computational chemistry analysis.
It demonstrates natural language interaction with quantum chemistry data through automatic file processing and semantic knowledge graphs. This enables researchers to interact with complex data more naturally: asking questions, exploring patterns, and extracting key information without manual intervention.
Some of its capabilities include:
- Automated Monitoring: Continuously watches the example_logs/ directory for new Gaussian files and processes them automatically.
- Knowledge Graph Generation: Translates Gaussian output into structured RDF triples for semantic analysis.
- Natural Language Querying: Allows users to ask intuitive questions like “How many molecules?” or “Show me SCF energies.”
- Live Data Analysis: Delivers real-time statistical summaries and insights from the processed data.
- Chemistry-Aware Intelligence: Interprets and responds accurately using domain-specific computational chemistry terms.
Why It Matters
Despite major advances in computational chemistry, many workflows remain constrained by outdated practices. Researchers often rely on large, unstructured log files, manually extracting values, exchanging data over email, and writing fragile parsing scripts tailored to each study.
These processes are not only time-consuming but also highly error-prone: a single formatting issue can disrupt an entire analysis. Without structured, searchable data, even basic retrospective questions become unreasonably difficult.
QuantumParse tackles this challenge by turning raw quantum-chemistry calculations into clear, searchable knowledge: faster, safer, and with no manual copy-paste.

Key Features
- File Monitoring: Real-time detection of new Gaussian files
- Python Integration: Uses cclib for parsing Gaussian output
- RDF Generation: Creates semantic knowledge graphs in Turtle format
- Statistical Analysis: Provides counts and summaries of molecular data
- Natural Language Interface: Query using chemistry terms and plain English
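The core Gaussian-to-RDF step can be sketched roughly as follows. This is a hedged illustration: the real plugin parses log files with cclib, whereas here the parsed values are hard-coded so the example is self-contained, and the `chem:` prefix and predicate names are invented.

```python
# Minimal sketch of serializing parsed Gaussian results as RDF triples
# in Turtle format. With cclib the values would come from something like
# cclib.io.ccread(path); this hard-coded dict stands in for that result.

def to_turtle(molecule_id, parsed):
    """Emit one calculation as Turtle triples (illustrative vocabulary)."""
    lines = [
        "@prefix chem: <http://example.org/chem#> .",
        "",
        f"chem:{molecule_id} a chem:Calculation ;",
        f'    chem:method "{parsed["method"]}" ;',
        f'    chem:basisSet "{parsed["basis"]}" ;',
        f"    chem:scfEnergy {parsed['scf_energy']} .",
    ]
    return "\n".join(lines)

parsed = {"method": "B3LYP", "basis": "6-31G(d)", "scf_energy": -76.4089}
print(to_turtle("water_opt", parsed))
```

Once the triples are loaded into a graph store, questions like "Show me SCF energies" reduce to simple pattern queries over `chem:scfEnergy`.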
$10,000 – BiomeAI
Github and Project Submission
BiomeAI is an intelligent Discord bot designed to turn complex microbiome PDF reports into clear, personalized health insights. It uses a step-by-step conversational workflow, guiding users through their results, helping them understand key findings and potential health implications.
The system leverages Retrieval-Augmented Generation (RAG) and vector embeddings to accurately interpret report content, maintain context, and deliver tailored recommendations through a natural, interactive experience.
Why It Matters
Microbiome testing is growing in popularity, but interpreting the results remains a major challenge. When someone receives a microbiome report, they’re often handed a 50+ page PDF packed with bacterial names, relative abundance percentages, and technical metrics that require a background in microbiology or bioinformatics to understand.
For most people, that creates several problems:
- Overwhelming complexity: The reports are dense and filled with jargon. Users are expected to interpret raw data without any clinical or personal context.
- Lack of personalization: Even when recommendations are included, they don’t take into account your diet, symptoms, lifestyle, or health goals, which makes it hard to apply the findings in real life.
- Information overload without direction: The volume of data is high, but clarity is low.
- Disconnected from symptoms: Reports rarely connect gut microbiome patterns to day-to-day experiences like bloating, fatigue, or digestion issues, leaving a critical gap between data and lived health concerns.
- No follow-up support: Once the report is delivered, that’s typically the end of the process. If you have questions or want to explore specific areas, you're on your own.
BiomeAI addresses these gaps by transforming microbiome PDF reports into clear, personalized health insights through a step-by-step conversation.
Key Features
- PDF Processing: Extracts and analyzes microbiome report data from PDF uploads
- Structured Conversation Flow: Guides users through a specific sequence of questions and predictions
- Vector Search: Uses pgvector for semantic similarity search across report content
- Cost Tracking: Monitors OpenAI API usage and costs for each interaction
- Thread Management: Creates dedicated Discord threads for each report analysis
- Automated Follow-ups: Sends actionable insights and Q&A prompts automatically
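The vector-search feature can be illustrated with plain cosine similarity. BiomeAI runs this inside Postgres via pgvector over real embeddings; the tiny hand-made vectors and report snippets below are stand-ins so the ranking logic is easy to follow.

```python
# Sketch of semantic similarity search over report chunks. In the real
# system, embeddings come from an embedding model and the nearest-
# neighbor query runs in Postgres via pgvector; these 3-dimensional
# vectors are invented for illustration.
import math

CHUNKS = {
    "High Bifidobacterium abundance detected": [0.9, 0.1, 0.0],
    "Low fiber fermentation capacity":         [0.2, 0.8, 0.1],
    "Elevated inflammatory markers":           [0.1, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_chunk(query_vec):
    """Return the report chunk most similar to the query embedding."""
    return max(CHUNKS, key=lambda text: cosine(query_vec, CHUNKS[text]))

# A query embedding pointing in the "inflammation" direction:
print(top_chunk([0.0, 0.3, 0.9]))
```

In production, the equivalent of `top_chunk` is a single SQL query using pgvector's distance operator, which keeps the search fast even across thousands of chunks.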
$10,000 – Holy Bio MCP
GitHub and Project Submission
Holy Bio MCP is a server for SynergyAge, an experimentally validated database of genetic interventions affecting lifespan across multiple model organisms.
This server implements the Model Context Protocol (MCP) for SynergyAge, providing a standardized interface for accessing synergistic genetic intervention and aging research data. MCP enables AI assistants and agents to query comprehensive longevity datasets through structured interfaces.
The server automatically downloads the latest SynergyAge database and documentation from Hugging Face Hub, ensuring you always have access to the most up-to-date data without manual file management.
Why It Matters
Modern biology research faces a major challenge: an overwhelming ecosystem of specialized tools, databases, and resources. Researchers often need to juggle 10 to 15 different platforms to piece together insights for a single project. The fragmented landscape forces scientists into repetitive, manual data transfers, copying and reformatting information just to keep workflows moving.
This manual wrangling consumes an estimated 60-80% of researchers’ time, creating a bottleneck that slows down progress and saps creative energy. Worse, each handoff between tools risks errors: typos, missing data, or incompatible formats can derail days or weeks of work.
The Longevity Genie team built Holy Bio MCP to radically simplify this process. Instead of navigating a maze of disconnected resources, researchers can now harness AI-powered conversational workflows to execute complex multi-step queries with a single command.
Key Features
- Comprehensive Search: Text search supporting multiple genes (comma or semicolon separated)
- Categorized Results: Search results grouped into four categories:
  - Mutants with interventions in all searched genes
  - Single-gene mutants (1-mutants)
  - Multi-gene mutants with subset interventions (n-mutants)
  - Extended combinations including additional genes
- Interactive Network Visualization: Visual exploration of mutant relationships with nodes representing lifespan models and edges showing genetic intervention relationships
- Epistasis Analysis: Assessment of genetic interactions between modulated genes
- External Integration: Links to KEGG pathways and model-specific databases for enhanced context
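The four-way result grouping can be sketched over a toy in-memory table. The real server queries the SynergyAge database it downloads from Hugging Face Hub; the mutant records below are invented for illustration (though daf-2, daf-16, and sir-2.1 are real longevity-related genes), and the bucket names are ours.

```python
# Sketch of categorized gene search: group mutant records relative to
# the searched gene set. Records are invented; SynergyAge holds real,
# experimentally validated lifespan data.

MUTANTS = [
    {"id": "m1", "genes": {"daf-2"}},
    {"id": "m2", "genes": {"daf-16"}},
    {"id": "m3", "genes": {"daf-2", "daf-16"}},
    {"id": "m4", "genes": {"daf-2", "daf-16", "sir-2.1"}},
]

def categorize(searched):
    searched = set(searched)
    result = {"all_searched": [], "single": [], "subset": [], "extended": []}
    for m in MUTANTS:
        genes = m["genes"]
        if not genes & searched:
            continue  # no overlap with the query at all
        if genes == searched:
            result["all_searched"].append(m["id"])   # all searched genes
        elif len(genes) == 1:
            result["single"].append(m["id"])         # 1-mutants
        elif genes < searched:
            result["subset"].append(m["id"])         # n-mutant subsets
        elif searched < genes:
            result["extended"].append(m["id"])       # extra genes included
    return result

print(categorize(["daf-2", "daf-16"]))
```

Exposed as an MCP tool, a function like this lets an AI assistant answer "which multi-gene interventions extend the daf-2/daf-16 combination?" without the user ever touching SQL.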
Secondary Prizes ($15,000 Total)
$5,000 – Neural Nexus
GitHub and Project Submission
Neural Nexus is an AI-powered platform transforming neurological drug discovery by accelerating breakthrough treatments through advanced molecular analysis, knowledge graph exploration, and AI-driven hypothesis generation. It integrates state-of-the-art AI with biomedical research, dramatically shortening drug development timelines (from decades to just years) while improving success rates through smarter target identification and efficient virtual screening.
Their solution empowers researchers with a comprehensive suite of tools designed to streamline neurological drug discovery and biomedical research:
- Accelerated Drug Discovery: Rapidly screen and optimize compounds with AI-driven predictions, cutting development time from years to months.
- Protein Structure Analysis: Visualize and analyze detailed 3D protein structures using AlphaFold and ESMFold. Identify binding sites interactively and assess druggability, supported by molecular docking simulations to predict drug-target interactions.
- Knowledge Graph Exploration: Navigate complex biomedical relationships with an interactive interface that connects proteins, drugs, diseases, and pathways in real time, facilitating hypothesis generation and validation through graph-based reasoning.
- AI-Powered Hypothesis Generation: Leverage Eliza AI agents to automatically generate and validate scientific hypotheses.
- Enhanced Research Efficiency: Automate data integration from multiple biomedical databases, enable real-time molecular visualization, and foster collaboration with shared knowledge graphs.
Why It Matters
Neural Nexus tackles the complexities and delays of neurological drug discovery by combining advanced AI, knowledge graphs, and molecular modeling into a single streamlined platform.
Traditional methods are slow, expensive, and often unsuccessful, largely because of the challenges posed by neurological diseases and their complex biological interactions. The project turns the industry’s major challenges into opportunities with AI-powered solutions:
- 90% drug failure rate: Intelligent pre-screening and validation.
- 10-15 year development cycles: Accelerated discovery pipelines.
- $2.6B average drug cost: Computational-first approach.
- Limited rare disease research: Democratized discovery tools.

Key Features
- Drug Discovery Suite
  - AI-Powered Compound Screening: Screen millions of molecules in minutes.
  - ADMET Prediction: Absorption, Distribution, Metabolism, Excretion, and Toxicity analysis.
  - Lead Optimization: Molecular property enhancement and drug-likeness scoring.
  - Target Validation: Protein-drug interaction prediction and binding affinity.
- Protein Analysis Lab
  - 3D Structure Prediction: AlphaFold, ESMFold, and ColabFold integration.
  - Binding Site Identification: Druggable pocket detection and characterization.
  - Molecular Docking: Virtual screening and pose prediction.
  - Function Prediction: GO annotation and pathway analysis.
- Knowledge Graph Explorer
  - Interactive Biomedical Networks: 2.5M+ nodes, 15M+ relationships.
  - Pathway Analysis: Disease-protein-drug connection discovery.
  - Literature Mining: Automated paper analysis and knowledge extraction.
  - Real-time Updates: Continuous integration of new research data.
- AI Hypothesis Engine
  - Novel Hypothesis Generation: ML-driven scientific theory creation.
  - Evidence Validation: Literature-backed hypothesis scoring.
  - Experimental Design: Automated protocol generation.
  - Testability Assessment: Feasibility and resource estimation.
$5,000 – ValleyDAO CLI Biohack
GitHub
ValleyDAO CLI is a comprehensive command-line tool designed to help researchers and entrepreneurs manage Biology and Climate Biotechnology projects directly from their terminal. The platform streamlines deep research processes and business model development by identifying target markets, analyzing market segments, generating business models, and conducting customer research.
The project is equipped with a Technology Development module integrating a powerful suite of AI models, including GPT-4.1, o1, and Perplexity’s Sonar Pro deep research model. These technologies work in harmony to deliver comprehensive research assistance that adapts to your project’s unique needs.
Complementing the technology side, the Business Development module leverages advanced large language models and API integrations to assist entrepreneurs and founders. This module offers systematic business planning support, guiding users through the essential steps needed to build sustainable and scalable ventures.
Why It Matters
Despite humanity’s drive for innovation, the methods we use to achieve progress (especially in deep tech) remain frustratingly inefficient. While science has made incredible strides, the overall input-output efficiency of innovation still falls far short of its potential.
Deep tech startups face brutal odds: studies suggest only about 5% survive from Seed to Series A, with even fewer progressing beyond. Why is it so difficult for highly skilled teams to build successful companies in this space?
ValleyDAO CLI offers a powerful solution by bringing complex research and business planning workflows into a single, easy-to-use command-line interface. It empowers researchers and entrepreneurs to efficiently navigate both the scientific and commercial aspects of Biology and Climate Biotechnology projects, reducing friction and accelerating progress from discovery to market-ready innovation.
Key Features
- Conducts interactive sessions to understand your project requirements.
- Assesses current project status and specific needs.
- Provides personalized guidance based on project context.
- Generates detailed research roadmaps with actionable steps.
- Creates targeted research queries for critical project aspects.
- Conducts automated research to answer key variables (e.g., optimal enzymes, temperature conditions, etc.).
- Delivers comprehensive final reports that serve as research roadmaps.
$5,000 – Protein Bank MCP
The PDB-MCP Server (Protein Data Bank – Model Context Protocol) is an open-source microservice that gives AI agents seamless, standardized access to the RCSB Protein Data Bank, the world’s largest archive of 3D protein structures. Instead of relying on bulk downloads or fragile web scraping, BioML systems can now retrieve high-quality protein structure data through a lightweight FastAPI service fully compliant with Anthropic’s Model Context Protocol (MCP) specification.
Designed for decentralized use via Docker or CLI, the PDB-MCP Server allows researchers, educators, and biotech startups to host their own mirrors, free from centralized control. It’s optimized for integration with MCP-aware large language models, enabling powerful, reproducible protein structure analysis in AI workflows.
PDB-MCP helps level the global playing field for AI-driven biological research and fosters open, collaborative innovation across institutions and borders.
Why It Matters
Biological Machine Learning (BioML) models depend heavily on access to high-quality, structured 3D protein data, but today’s methods for obtaining that data are inefficient and restrictive. Traditional access to the Protein Data Bank (PDB) often involves downloading large files or scraping web interfaces, both of which are slow, rate-limited, and ill-suited for the rapid iteration cycles needed in advanced fields.
The PDB-MCP Server solves this by delivering compact, queryable context bundles (including key metadata like title, method, resolution, ligands, and protein chains) through a lightweight, standards-based API. This enables BioML agents and reinforcement learning systems to access just the information they need, when they need it, without bulky file handling or throttling.
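The "context bundle" idea can be sketched as simple field selection: keep the metadata an LLM agent needs, drop the bulky coordinate data. The field names mirror those listed above, but the toy record and the function are illustrative, not the service's actual schema.

```python
# Sketch of building a compact context bundle from a fetched PDB entry.
# The record below is a hand-made stand-in; the real service populates
# these fields from RCSB Protein Data Bank queries.

RECORD = {
    "id": "1ABC",
    "title": "Example lysozyme structure",
    "method": "X-RAY DIFFRACTION",
    "resolution": 1.8,
    "ligands": ["NAG", "ZN"],
    "chains": ["A", "B"],
    "atoms": "...thousands of coordinate lines omitted...",
}

def context_bundle(record):
    """Keep only the metadata an agent needs; drop bulky coordinates."""
    keys = ("id", "title", "method", "resolution", "ligands", "chains")
    return {k: record[k] for k in keys}

bundle = context_bundle(RECORD)
print(bundle)
```

Serving a bundle like this instead of a multi-megabyte structure file is what makes rapid, repeated querying by agents practical.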
Key Features
- Open – MIT-licensed code and CC-BY-4.0 data for unrestricted use.
- Inclusive – Runs anywhere: from local laptops to full-scale Kubernetes clusters.
- Composable – Easily integrates into modular AI ecosystems.
- Reproducible – All queries are traceable with built-in provenance tracking.
Solana Prizes ($15,000 Total)
Awarded to projects that made unique contributions to the hackathon ecosystem. This includes exceptional documentation, community engagement, open-source design patterns, and more.
$5,000 – Tania
GitHub and Project Submission
Tania is an AI-driven system designed to tackle a critical challenge in scientific research: identifying likely null or inconclusive results that often remain unpublished or obscured in the literature. It combines Natural Language Processing (NLP) with knowledge graph analysis to scan scientific manuscripts and research data, detecting subtle linguistic and contextual clues that indicate failed replication attempts, negative outcomes, contradictory findings, or experimental limitations.
The system assigns confidence scores to flag potential null results. It delivers clear justifications, interactive visualizations, and prioritized lists of hypotheses, helping researchers uncover hidden insights and reduce publication bias. Tania enhances transparency and efficiency in scientific discovery by revealing results that might otherwise go unnoticed.

Why It Matters
Scientific research faces a significant challenge known as null result bias: when experiments that fail to support their original hypotheses go unpublished.
This leads to wasted time, resources, and effort as other researchers unknowingly repeat unsuccessful studies. It also skews the scientific record by highlighting only positive findings, leaving gaps in our understanding.
Tania addresses this problem by acting as a smart research assistant that uses natural language processing and knowledge graphs to detect hidden null results within published papers. The system helps researchers, meta-analysts, and funders avoid redundancy, pinpoint weak areas in research, and make better-informed decisions.
Key Features
- Manuscript Analysis: Uses NLP to detect contextual clues indicating null or inconclusive results, such as failed replications, contradictory findings, and cautious language.
- Knowledge Graph Analysis: Builds and analyzes knowledge graphs to identify patterns like frequently tested but unsupported hypotheses and conflicting scientific relationships.
- Hypothesis Validation Scoring: Assigns confidence scores to hypotheses based on literature mentions, evidence strength, contextual cues, and contradictions with known facts.
- Comprehensive Output: Generates lists of likely null-result hypotheses with textual justifications, confidence scores, and visualizations of knowledge graph patterns.
- Technology Integration: Utilizes advanced NLP tools (spaCy, transformers), knowledge graph frameworks (Neo4j, RDF), machine learning for classification, and data visualization libraries (D3.js, Cytoscape).
- Evaluation Metrics: Measures accuracy against expert judgment, clarity of justifications, usefulness of confidence scores, and effectiveness of visualizations.
$5,000 – DeepIdea: AI Agents for Novel Science
GitHub and Project Submission
DeepIdea turns ambitious research goals into actionable scientific projects using AI agents. Designed for researchers and innovators, the platform transforms raw ideas (optionally anchored by existing papers) into validated, well-structured proposals ready for execution and funding.
The platform guides an idea through several stages, primarily driven by specialized AI agents:
- Input & Initialization: Users begin by submitting a research goal or idea, with the option to attach a supporting paper for context.
- Multi-Strategy Idea Generation: A diverse team of AI agents applies various ideation techniques to generate a wide range of novel research directions.
- Intelligent Ranking & Selection: The AI evaluates all generated ideas using internal LLM-based debates and scoring models focused on novelty, feasibility, and impact.
- Novelty Validation: Leading ideas are validated against scientific databases (e.g., via FutureHouse) to confirm they haven't been explored or published.
- Contextual Enrichment & Critical Analysis: Validated ideas undergo deeper investigation. AI agents simulate expert reviews, gather relevant literature, and engage in debate to stress-test each concept.
- Project Standardization: Once refined, the selected idea is structured into a full research plan, complete with hypothesis, abstract, experimental design, and a step-by-step guide for the first experiment.
- Timestamped Research Artifact (TRA): Users can mint a TRA NFT on the Solana blockchain: a verifiable, tamper-proof record of the idea’s development and validation.
- Bounties & Grants: The TRA can be used to launch bounties, enabling the community to vote on projects, fund them, and allocate rewards to researchers (human or AI) who carry them forward.
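The ranking-and-selection stage can be sketched as a weighted composite score. In DeepIdea the per-criterion scores come from LLM-based debates; here they are hand-assigned, and the weighting scheme is an assumption for illustration.

```python
# Sketch of ranking ideas by novelty, feasibility, and impact. Scores
# and weights are invented; DeepIdea derives them from LLM-based
# debates and scoring models.

WEIGHTS = {"novelty": 0.4, "feasibility": 0.3, "impact": 0.3}

IDEAS = [
    {"title": "Idea A", "novelty": 0.9, "feasibility": 0.4, "impact": 0.8},
    {"title": "Idea B", "novelty": 0.6, "feasibility": 0.8, "impact": 0.7},
    {"title": "Idea C", "novelty": 0.3, "feasibility": 0.8, "impact": 0.4},
]

def composite(idea):
    """Weighted sum of the three criteria, rounded for display."""
    return round(sum(WEIGHTS[k] * idea[k] for k in WEIGHTS), 2)

ranked = sorted(IDEAS, key=composite, reverse=True)
for idea in ranked:
    print(idea["title"], composite(idea))
```

A fixed weighted sum is the simplest possible aggregator; the appeal of LLM-based debate is that it can trade the criteria off contextually instead of with static weights.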
DeepIdea bridges inspiration and execution, making cutting-edge scientific exploration more collaborative, transparent, and fundable.
Why It Matters
Developing a strong “research taste” traditionally requires years of reading papers and mentorship from experienced scientists: resources not everyone has access to. Early-career researchers and autonomous AI agents alike often struggle to contribute meaningfully to cutting-edge science due to gaps in guidance, creativity, and validation tools.
DeepIdea addresses this by streamlining and democratizing early-stage scientific ideation. The platform empowers individuals with AI systems to turn promising scientific insights into actionable, high-impact projects, complete with tailored, step-by-step execution plans.
Key Features
- AI-Powered Idea Generation: Utilizes multiple AI agents with diverse strategies.
- Automated Ranking & Deduplication: Employs AI for intelligent selection and refinement of ideas.
- Transparent Reasoning: Users can inspect the reasoning behind each step of the process.
- External Validation: Integrates with tools like FutureHouse for novelty checks.
- Multi-Expert Simulation: AI agents take on expert roles for in-depth analysis and debate.
- Detailed Experiment Design: Generates actionable experiment plans.
- Blockchain Integration: Features Timestamped Research Artifacts (NFTs) on Solana.
- Decentralized Funding: Enables a bounty system for community-driven research support.
$5,000 – Hypothesis-to-Experiment Orchestrator (HEO)
GitHub and Project Submission
The Hypothesis-to-Experiment Orchestrator (HEO) is a powerful ElizaOS plugin designed to automate and streamline the entire scientific research workflow, from hypothesis generation to experimental execution and on-chain verification.
HEO connects AI, lab automation, and decentralized infrastructure to enable fast, reproducible, and trustworthy science.
The platform integrates four key components into a seamless pipeline:
- Hypothesis Generation: Combines Google Gemini with OxiGraph-powered Retrieval-Augmented Generation (RAG) to propose data-backed scientific hypotheses.
- Cloud-Lab Execution: Automatically translates hypotheses into executable lab protocols and dispatches them to automated cloud labs like Strateos or Emerald Cloud Lab (ECL).
- Proof & Anchoring: Generates zkSNARK proofs of the experiment’s integrity and anchors them on Solana for tamper-proof validation.
- FAIR Packaging: Outputs structured, machine-readable data in FAIR-compliant JSON-LD format and stores it on IPFS for open, decentralized access.
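The FAIR-packaging step can be sketched as wrapping an experiment record in JSON-LD so it is machine-readable and self-describing. The schema.org vocabulary and field names below are illustrative stand-ins rather than HEO's actual schema, and the IPFS identifier is a placeholder.

```python
# Sketch of packaging an experiment as FAIR-style JSON-LD. Vocabulary
# and fields are assumptions for illustration; HEO's real output schema
# may differ.
import json

def to_fair_jsonld(hypothesis, protocol, result_cid):
    record = {
        "@context": "https://schema.org/",
        "@type": "Dataset",
        "name": hypothesis,
        "measurementTechnique": protocol,
        # In HEO this would point at the IPFS content identifier
        # of the experimental results.
        "distribution": {"@type": "DataDownload", "contentUrl": result_cid},
    }
    return json.dumps(record, indent=2)

doc = to_fair_jsonld(
    "Compound X inhibits enzyme Y",
    "ELISA",
    "ipfs://<results-cid>",
)
print(doc)
```

Because the payload carries its own `@context`, any downstream consumer can interpret the fields without out-of-band documentation, which is the point of FAIR packaging.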
HEO turns research ideas into verifiable experiments with minimal human intervention, boosting reproducibility, reducing bias, and accelerating the pace of scientific discovery.
Why It Matters
Modern scientific research is slow, expensive, and largely inaccessible. Forming a solid hypothesis requires sifting through vast, fragmented literature. Designing and executing lab experiments is costly (averaging $1.4K per protocol) and often yields inconclusive or irreproducible results.
Access to advanced lab infrastructure is typically restricted to elite institutions, limiting who can meaningfully contribute to innovation.
The Hypothesis-to-Experiment Orchestrator (HEO) directly tackles these bottlenecks by automating core components of the scientific workflow. It streamlines hypothesis generation, protocol design, and experimental validation, dramatically reducing time and cost.
Key Features
- Hypothesis Generation Engine: Uses Google Gemini Pro and OxiGraph with Retrieval-Augmented Generation (RAG) over OpenAlex/MEDLINE to generate 100+ novel hypotheses per hour, outputting FAIR-compliant RDF triples stored on IPFS.
- Protocol Automation Suite: Includes 8+ smart contract templates (CRISPR, ELISA, PCR) on Solana, with real-time reagent inventory checks to optimize lab resource utilization.
- zk-Validation Layer: Applies Groth16 zkSNARKs to generate cryptographic proofs for each experimental step, timestamped and anchored on the Solana blockchain for tamper-proof reproducibility.
- ElizaOS Plugin Architecture: Built as a modular plugin within the ElizaOS SDK, ensuring seamless interoperability with other scientific agents and platforms.
- FAIR Packaging & Storage: Automatically structures outputs into FAIR JSON-LD format and stores them on IPFS for open access and decentralized retrieval.
- Experiment Workflow Automation: Orchestrates the full pipeline from hypothesis to protocol execution and validation, significantly reducing time, cost, and manual labor in experimental research.
- Decentralized Knowledge Publishing: Publishes structured hypothesis data and experimental results to decentralized knowledge graphs, enabling transparent, collaborative science.
Midpoint Prizes ($25,000 Total)
Midpoint prize winners were announced at the end of May to the teams making the most progress halfway through the Hackathon. In addition to ValleyDAO CLI Biohack and Protein Bank MCP, which are already mentioned in this article, there was one more awarded team.
$10,000 – Chronos by SpineDAO
GitHub
Chronos is a modular, agentic system built by SpineDAO to digitize, decode, and integrate centuries of forgotten biomedical knowledge, specifically from historical spine surgery manuscripts and traditional Indian medicine systems like Siddha and Ayurveda. The project connects ancient clinical wisdom with modern AI-native research infrastructure by transforming unstructured, often non-digitized texts (written in classical Tamil and other languages) into a machine-readable, interoperable knowledge base.
The system uses OCR, natural language understanding, and semantic graph reasoning to unlock overlooked therapeutic strategies, diagnostics, and hypotheses, plugging them directly into research pipelines powered by ElizaOS.
Why It Matters
Modern biomedical science has a blind spot: vast volumes of historical and non-Western clinical knowledge remain undigitized, untranslated, and excluded from current evidence-based medicine. Valuable insights, rooted in generations of empirical observation, are lost not because they lack merit, but because they’ve never been made computationally accessible.
This historical loss is a massive limitation on today’s innovation, especially in areas like spinal care, where traditional techniques may hold answers modern medicine has yet to rediscover.
Chronos aims to correct that by:
- Turning forgotten medical texts into searchable, actionable data.
- Enabling AI agents to reason over this knowledge.
- Bringing historical memory into the frontier of autonomous biotech.
What This Moment Means Looking Forward
The Bio x AI Hackathon was a glimpse of how science could work when it’s reimagined from the ground up.
The teams that rose to the top did more than meet requirements. They found cracks in the old system and built something new in the gaps. They turned obscure manuscripts into living knowledge graphs. They gave proteins a language. They helped scientists become founders.
None of this happens in a vacuum. It happens because a community decides to care deeply about the shape of progress, and to actually do something about it.
So if you wrote code, joined a Discord chat, shared feedback, or just watched from the sidelines: thank you. This is how the future of research gets built: piece by piece, by people who show up.
We’re just getting started.
Let’s keep pushing science forward, together.