Daniel Scalena

When I’m old I want to be an astronaut 🚀

Milan, Italy

Hi! I am Daniel, a double PhD student at the 🇮🇹 University of Milano-Bicocca and the 🇳🇱 University of Groningen, working on the interpretability, fairness, and security of generative (and non-generative) Large Language Models. My supervisors are Elisabetta Fersini and Malvina Nissim.

My research focuses on using interpretability as a tool to make generative models safer, more reliable, and less toxic, in order to extend and improve their real-world applications.


Dec 7, 2023 Presenting my poster and paper at the BlackBoxNLP workshop @EMNLP 2023 in Singapore 🇸🇬
Oct 26, 2023 Graduated! Thesis here 🎓
Jul 10, 2023 RewardLM (alpha) is now public! 🚀

selected publications

  1. Let the Models Respond: Interpreting Language Model Detoxification Through the Lens of Prompt Dependence
    Daniel Scalena, Gabriele Sarti, Malvina Nissim, and 1 more author
  2. MIND at SemEval-2023 Task 11: From Uncertain Predictions to Subjective Disagreement
    Giulia Rizzi, Alessandro Astorino, Daniel Scalena, and 2 more authors
    In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), Jul 2023