
VideoQA-Eval

This repo contains a comprehensive metric toolset for evaluating NLG, VideoQA, and related tasks. We also offer a reading list of related papers and repositories.

Our MMEval Metric

MMEval is a novel LLM-based metric designed for Video Question Answering that can utilize multimodal information to evaluate answers. You can find more details in our CVPR 2024 paper CUVA and its GitHub repo.
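
To make the idea concrete, here is a minimal sketch of an LLM-as-judge scoring call for a VideoQA answer. This is not the actual MMEval implementation; the model name, prompt wording, and the use of frame captions as the multimodal context are illustrative assumptions.

```python
# Hypothetical sketch of LLM-based VideoQA answer scoring (not MMEval itself).
# Assumptions: an OpenAI-compatible chat model, frame captions as visual context,
# and a 0-5 integer rating scale.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def score_answer(question: str, reference: str, candidate: str,
                 frame_captions: list[str]) -> float:
    """Ask an LLM to rate a candidate answer from 0 to 5 given visual context."""
    context = "\n".join(f"- {c}" for c in frame_captions)
    prompt = (
        "You are grading an answer to a question about a video.\n"
        f"Visual context (frame captions):\n{context}\n"
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Candidate answer: {candidate}\n"
        "Return only a single integer score from 0 (wrong) to 5 (perfect)."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return float(resp.choices[0].message.content.strip())


# Example usage:
# score = score_answer(
#     "What causes the accident?",
#     "The car runs a red light.",
#     "A car ignores the traffic light and collides with another vehicle.",
#     ["a car approaches an intersection", "the traffic light is red"],
# )
```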

Installation

Reading List & Acknowledgements

We thank the following repositories and authors for their contributions to VideoQA metrics. Their work has been instrumental in enhancing our project, and we are grateful for their efforts.
