TALKS
Kallmayer, A. (2024, September). Graph representation learning for structured scene representations. Cardinal Mechanisms of Perception retreat, Germany.
Kallmayer, A. (2024, June). How scene grammar might structure neural representations of object and scene processing. Categorization in Perception and Action: Minds, Models, Mechanisms, Germany.
Võ, M. L.-H., & Kallmayer, A. (2024, May). Combining Generative Adversarial Networks (GANs) with behavior and brain recordings to study scene understanding [Symposium Presentation]. Vision Sciences Society Meeting, Florida, USA. https://doi.org/10.1167/jov.24.10.228
Kallmayer, A., & Võ, M. L.-H. (2024, March). Sticking together – hierarchical relationships between objects in scenes are reflected in neural activation patterns across time [Conference Talk]. Tagung experimentell arbeitender Psycholog*innen, Regensburg, Germany.
POSTERS (selected)
Kallmayer, A., & Võ, M. L.-H. (2024, May). Time-resolved brain activation patterns reveal hierarchical representations of scene grammar when viewing isolated objects [Poster]. Vision Sciences Society Meeting, Florida, USA. https://doi.org/10.1167/jov.24.10.655
Rothenberg, E. S., Kallmayer, A., Wiesmann, S., & Võ, M. L.-H. (2024, March). More is not always better: Temporal neural signatures of object-driven versus scene-driven human scene categorization [Conference Poster]. Tagung experimentell arbeitender Psycholog*innen, Regensburg, Germany.
Bechar, D., Kallmayer, A., & Võ, M. L.-H. (2023, August). I spy with my little eye… an anchor object! How anchor objects modulate eye movements during visual search [Conference Poster]. European Conference on Visual Perception, Cyprus.
Kallmayer, A., & Võ, M. L.-H. (2023, May). How real can they get? Investigating neural responses to GAN-generated scenes [Conference Poster]. Vision Sciences Society Meeting, Florida, USA.
Kallmayer, A., & Võ, M. L.-H. (2022, May). What makes a scene? Investigating generated scene information at different visual processing stages [Conference Poster]. Vision Sciences Society Meeting, Florida, USA.
Kallmayer, A., & Võ, M. L.-H. (2022, March). What’s in a scene? Investigating generated scene information at different visual processing stages [Conference Poster]. Tagung experimentell arbeitender Psycholog*innen, online.
Kallmayer, A., & Võ, M. L.-H. (2021, August 22-27). Hierarchies in scenes – the role of object functions in shaping semantic networks [Conference Poster]. European Conference on Visual Perception, online.
Kallmayer, A., Prince, J., & Konkle, T. (2020, October). Comparing representations that support object, scene, and face recognition using representational trajectory analysis [Conference Poster]. Vision Sciences Society online conference.
Wang, R., Janini, D., Kallmayer, A., & Konkle, T. (2020, October). Mid-level feature differences support early EEG-decoding of animacy and object size distinctions [Conference Poster]. Vision Sciences Society online conference.
Kallmayer, A., Draschkow, D., & Võ, M. L.-H. (2018, August 26-30). Investigating viewpoint-dependence and context in object recognition using depth rotated 3D models in a sequential matching task [Conference Poster]. European Conference on Visual Perception, Trieste, Italy.
INVITED TALKS (selected)
August/24 – Kietzmann lab
“Quantifying scene grammar representations”
November/23 – CVAI lab
“Making a scene – investigating the ingredients of real-world scenes”
December/22 – Interdisciplinary colloquium Polytechnische Gesellschaft
“Scene grammar: How we perceive our world in a visually structured way”
PEER-REVIEWED PUBLICATIONS
Kallmayer, A., & Võ, M. L.-H. (2024). Anchor objects drive realism while diagnostic objects drive categorization in GAN-generated scenes. Communications Psychology, 2, 68. https://doi.org/10.1038/s44271-024-00119-z
Kallmayer, A., Võ, M. L.-H., & Draschkow, D. (2023). Viewpoint dependence and scene context effects generalize to depth rotated three-dimensional objects. Journal of Vision, 23(10), 9. https://doi.org/10.1167/jov.23.10.9
PREPRINTS
Kallmayer, A., Zacharias, L., Jetter, L., & Võ, M. L.-H. (2024). Object representations reflect hierarchical scene structure and depend on high-level visual, semantic, and action information. PsyArXiv. https://doi.org/10.31234/osf.io/hs835