Daniel Scalena
Milan, Italy
Hi! I am Daniel, a (double) first-year PhD student at the 🇮🇹 University of Milano-Bicocca and the 🇳🇱 University of Groningen, working on interpretability, fairness, and security of generative (and non-generative) Large Language Models. My supervisors are Elisabetta Fersini and Malvina Nissim.
My research focuses on using interpretability as a tool to make generative models safer, more reliable, and less toxic, in order to extend and improve their real-world applications.
In my spare time I take pictures and echo "from NL import infrastructure" > Milan.py.
news
| Date | News |
|---|---|
| Oct 02, 2024 | 📜 Multi-property Steering paper accepted to BlackBoxNLP 2024 (@ EMNLP 2024), and 📜 A gentle push funziona benissimo accepted at the CLIC-it conference! 🎉 |
| Jun 26, 2024 | 📚 New work available on arXiv: Multi-property Steering of Large Language Models with Dynamic Activation Composition |
| Dec 07, 2023 | Presenting my poster and paper at the BlackBoxNLP workshop @ EMNLP 2023 in Singapore 🇸🇬 |
| Oct 26, 2023 | Graduated! Thesis here 🎓 |