If you trained a model and want to feed its predictions back into refinery, e.g. to find records where your weakly supervised label didn't match the model prediction, you can now do so easily.
To collect this feedback, simply use our Python SDK, which provides wrappers for e.g. HuggingFace, PyTorch, and Sklearn.
You can use these feedback loops to find e.g. hard wrong model predictions: cases in which your model confidently predicts one label, but the weakly supervised label says something different.
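The idea can be sketched in plain Python. This is an illustrative example, not the SDK's actual API: the `find_hard_wrong` helper, the record fields, and the confidence threshold are all assumptions made for the sake of the sketch.

```python
# Illustrative sketch (not the refinery SDK API): flag "hard wrong" records,
# i.e. confident model predictions that disagree with the weakly supervised label.

def find_hard_wrong(records, threshold=0.9):
    """records: dicts with 'weak_label', 'prediction', and 'confidence' keys."""
    return [
        r for r in records
        if r["confidence"] >= threshold and r["prediction"] != r["weak_label"]
    ]

records = [
    {"id": 1, "weak_label": "positive", "prediction": "positive", "confidence": 0.97},
    {"id": 2, "weak_label": "negative", "prediction": "positive", "confidence": 0.95},
    {"id": 3, "weak_label": "positive", "prediction": "negative", "confidence": 0.55},
]

# Record 2 is "hard wrong": high confidence, yet it contradicts the weak label.
# Record 3 disagrees too, but below the threshold it is merely uncertain.
print([r["id"] for r in find_hard_wrong(records)])  # → [2]
```

Records surfaced this way are good candidates for manual review: either the model is wrong, or your labeling functions need refinement.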
Currently only available for classification models
We're testing this feature on classification models for now. If it sees wide use, we'll add a callback for extraction models as well.