Over-reliance on Computational Tools
The study relies heavily on computational tools such as ESM, AlphaFold-Multimer, and Rosetta, each of which has inherent limitations and potential biases. Although the Virtual Lab agents attempted to mitigate these limitations, depending on computational predictions without extensive experimental validation risks propagating inaccuracies and limits the generalizability of the findings.
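To make the concern concrete, consider the kind of composite ranking such a pipeline produces. The sketch below is purely illustrative: the weights, score names, and normalization are assumptions, not the study's actual implementation. Because the final rank is a fixed combination of each tool's output, any systematic bias in ESM, AlphaFold-Multimer, or Rosetta flows directly into candidate selection unless experiments break the loop.

```python
# Hypothetical sketch of a composite in-silico ranking; weights and field
# names are assumptions, not the paper's implementation. Bias in any single
# predictor propagates straight into the final ordering.

from dataclasses import dataclass

@dataclass
class CandidateScores:
    name: str
    esm_log_likelihood_ratio: float   # ESM: mutation plausibility vs. parent sequence
    af_interface_plddt: float         # AlphaFold-Multimer: interface confidence (0-100)
    rosetta_dg: float                 # Rosetta: predicted binding energy (more negative = better)

def composite_score(c: CandidateScores,
                    w_esm: float = 0.2,
                    w_af: float = 0.5,
                    w_rosetta: float = 0.3) -> float:
    """Weighted sum of roughly normalized per-tool scores (hypothetical weights)."""
    esm_term = c.esm_log_likelihood_ratio      # already a log-ratio
    af_term = c.af_interface_plddt / 100.0     # scale to [0, 1]
    rosetta_term = -c.rosetta_dg / 10.0        # flip sign so higher is better
    return w_esm * esm_term + w_af * af_term + w_rosetta * rosetta_term

candidates = [
    CandidateScores("Nb-1", esm_log_likelihood_ratio=1.2, af_interface_plddt=82.0, rosetta_dg=-35.0),
    CandidateScores("Nb-2", esm_log_likelihood_ratio=0.4, af_interface_plddt=90.0, rosetta_dg=-28.0),
]
ranked = sorted(candidates, key=composite_score, reverse=True)
print([c.name for c in ranked])
```

A ranking of this form is only as trustworthy as its least reliable input; experimental feedback is the natural way to recalibrate it.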
Limited Success Rate for Newer Variants
Of the 92 nanobodies designed by the Virtual Lab, only two showed promising binding profiles beyond the Wuhan strain of SARS-CoV-2, a hit rate of roughly 2%. This suggests that the workflow's success rate in generating nanobodies effective against newer variants may be limited and that further optimization and refinement are needed.
LLM Data Cutoff Limitations
The Virtual Lab's reliance on pre-trained large language models (LLMs) introduces limitations tied to the models' training-data cutoff. The agents may be unaware of the most recent scientific literature and code, potentially overlooking newer tools and approaches.
Dependence on Prompt Engineering
The study acknowledges the need for prompt engineering to guide the LLM agents toward desirable responses. This dependence on human intervention can introduce biases and may require iterative adjustments to achieve optimal results.
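A small illustration of where this intervention enters the workflow: the sketch below uses a hypothetical prompt template (the roles, constraints, and defaults are assumptions, not the study's actual prompts). Every hard-coded instruction is a human design decision, so changing it can change the agents' recommendations, which is precisely the source of bias and the need for iteration noted above.

```python
# Hypothetical prompt scaffolding for an LLM agent; the template and its
# constraints are illustrative assumptions, not taken from the study.

AGENT_PROMPT_TEMPLATE = """You are the {role} on a nanobody design team.
Task: {task}

Constraints chosen by the human researcher:
- Prefer mutations supported by ESM log-likelihood.
- Keep the candidate pool under {max_candidates} sequences.
- Justify each recommendation in under {word_limit} words.
"""

def build_prompt(role: str, task: str, max_candidates: int = 20, word_limit: int = 150) -> str:
    """Fill in the template; every default value here is a human choice."""
    return AGENT_PROMPT_TEMPLATE.format(
        role=role, task=task, max_candidates=max_candidates, word_limit=word_limit
    )

print(build_prompt("immunologist", "rank candidate mutations against a newer SARS-CoV-2 variant"))
```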
Limited Experimental Validation
Experimental validation of the designed nanobodies was limited to ELISA binding assays. While ELISA can assess binding, it does not measure neutralizing activity or efficacy in a biological context, which would require, for example, pseudovirus or live-virus neutralization assays.