
evo_prot_grad

get_expert

```python
evo_prot_grad.get_expert(
    expert_name: str,
    scoring_strategy: str,
    temperature: float = 1.0,
    model: Optional[nn.Module] = None,
    tokenizer: Optional[Union[ExpertTokenizer, PreTrainedTokenizerBase]] = None,
    device: str = 'cpu'
) -> Expert
```

Currently supported expert types, passed as the `expert_name` argument (a minimal usage sketch follows this list):

- `bert`
- `causallm`
- `esm`
- `evcouplings`
- `onehot_downstream_regression`
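
Since `model` and `tokenizer` are optional, a built-in expert can be requested by name alone. A minimal sketch, assuming the library loads a default pretrained model and tokenizer when both arguments are left as `None`:

```python
from evo_prot_grad.experts import get_expert

# Minimal usage: request a built-in expert by name and rely on the
# default model/tokenizer (assumed to be loaded when both are None).
expert = get_expert(
    expert_name = 'esm',
    scoring_strategy = 'mutant_marginal',
    temperature = 1.0,
    device = 'cpu'
)
```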

Customize the expert by specifying the `model` and `tokenizer`. For example:

```python
from evo_prot_grad.experts import get_expert
from transformers import AutoTokenizer, EsmForMaskedLM

expert = get_expert(
    expert_name = 'esm',
    model = EsmForMaskedLM.from_pretrained("facebook/esm2_t36_3B_UR50D"),
    tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t36_3B_UR50D"),
    scoring_strategy = 'mutant_marginal',
    temperature = 1.0,
    device = 'cuda'
)
```
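
The returned `Expert` is typically passed to the package's directed-evolution sampler. A hedged sketch: the `DirectedEvolution` constructor and keyword names below are assumed from the project README and may differ between versions, and the wild-type sequence is a placeholder.

```python
import evo_prot_grad

# Run a short directed-evolution run guided by the expert created above.
# Keyword names are assumed from the project README; adjust for your version.
variants, scores = evo_prot_grad.DirectedEvolution(
    wt_protein = 'MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ',  # placeholder wild-type sequence
    output = 'best',             # return the best variant per chain
    experts = [expert],          # the Expert returned by get_expert
    parallel_chains = 1,
    n_steps = 20,
    max_mutations = 10,
)()
```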

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `expert_name` | `str` | Name of the expert to be used. | required |
| `scoring_strategy` | `str` | Approach the expert uses to score variants. | required |
| `temperature` | `float` | Temperature for the expert. | `1.0` |
| `model` | `Optional[nn.Module]` | Model to be used for the expert. | `None` |
| `tokenizer` | `Optional[Union[ExpertTokenizer, PreTrainedTokenizerBase]]` | Tokenizer to be used for the expert. | `None` |
| `device` | `str` | Device on which to run the expert. | `'cpu'` |

Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If the expert name is not found. |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| `expert` | `Expert` | An instance of the expert. |