
Create an initial framework for an Autograder for the monthly challenges. #94

Closed

victor-onofre opened this issue May 10, 2024 · 4 comments · Fixed by #107

@victor-onofre (Contributor) commented May 10, 2024

For the mentorship program and the monthly challenges, a proposal is being developed to ensure impartiality in code review. The suggestion is to create a grader that, given the name of the target function and a reference challenge to grade against, can run a submission on multiple inputs. To achieve this, the following points are considered (a minimal sketch follows the list):

  • The grader should be easily configurable for each challenge.
  • A minimal example based on a previous challenge is required.
  • It should specify the function name and the expected inputs, and display the corresponding output.
  • Generic tests should be provided for each function.
  • A guide on how to generate an open test should be included, along with at least five private tests to validate the submission.
  • If an error occurs, an exception indicating that the code is incorrect should be raised.
  • Optionally, a flag may be added to distinguish between programming errors and general answer errors.

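As a starting point, here is a minimal sketch of how such a per-challenge configuration could look in plain Python. Every name in it (`Grader`, `TestCase`, `WrongAnswerError`, `SubmissionError`, the toy `add` challenge) is hypothetical rather than an existing API in this repository; see #107 for the actual proposal.

```python
from dataclasses import dataclass
from typing import Any, Callable, Sequence


class WrongAnswerError(Exception):
    """Raised when the submission runs but returns an incorrect result."""


class SubmissionError(Exception):
    """Raised when the submitted code itself crashes (a programming error)."""


@dataclass
class TestCase:
    args: tuple          # positional inputs passed to the solution
    expected: Any        # reference output for those inputs
    public: bool = True  # public tests are shown; private tests stay hidden


@dataclass
class Grader:
    """Per-challenge configuration: the target function name and its tests."""
    function_name: str
    tests: Sequence[TestCase]

    def grade(self, namespace: dict) -> None:
        """Look up the solution by name in `namespace` and run every test."""
        func: Callable = namespace[self.function_name]
        for i, case in enumerate(self.tests):
            try:
                result = func(*case.args)
            except Exception as exc:  # crash inside the submitted code
                raise SubmissionError(
                    f"test {i}: {self.function_name}{case.args} raised {exc!r}"
                ) from exc
            if result != case.expected:  # ran fine, but the answer is wrong
                raise WrongAnswerError(
                    f"test {i}: expected {case.expected!r}, got {result!r}"
                )


# Toy configuration: one open (public) test plus five private tests.
grader = Grader(
    function_name="add",
    tests=[
        TestCase(args=(1, 2), expected=3, public=True),
        *[TestCase(args=(n, n), expected=2 * n, public=False) for n in range(5)],
    ],
)
grader.grade({"add": lambda a, b: a + b})  # passes silently; raises otherwise
```

Splitting the failure modes into two exception types would cover the optional last point: `SubmissionError` signals a programming error in the submitted code, while `WrongAnswerError` signals code that ran but returned the wrong answer.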
@arav-agarwal2

I would love to work on this for unitaryhack - Can I be assigned this issue?

@ljcamargo (Contributor)

Hi @victor-onofre, please check proposal #107 for closing this.

@ljcamargo (Contributor)

Hi @victor-onofre, tomorrow is the deadline for Unitary Hack, so I'm in a rush to see whether this could be merged.

@ljcamargo (Contributor)

Hello again @victor-onofre, I want to follow up on this issue. The Unitary Hack team extended the deadline to the 26th so that dangling PRs like this one could get merged.
