Humans & AI
In this line of work, we focus mainly on how large language models, and bots driven by them, can influence economic behavior.
One of our goals is to contribute to the FinTech literature by investigating aversion towards machines in investment scenarios, taking a closer look at what happens in situations of conflict or adversity. We are also interested in what leads people to accept products that use augmented reality, as well as autonomous vehicles.
Publications
Niszczota, P., & Abbas, S. (2023). GPT has become financially literate: Insights from financial literacy tests of GPT and a preliminary test of how people use it as a source of advice. Finance Research Letters, 104333. https://doi.org/10.1016/j.frl.2023.104333
Summary: We assess the ability of GPT, a large language model, to serve as a financial robo-advisor for the masses, using a financial literacy test. Davinci and ChatGPT based on GPT-3.5 score 66% and 65% on the financial literacy test, respectively, against a chance baseline of 33%. However, ChatGPT based on GPT-4 achieves a near-perfect 99% score, pointing to financial literacy becoming an emergent ability of state-of-the-art models. We use the Judge-Advisor System and a savings dilemma to illustrate how researchers might assess advice utilization from LLMs.
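The 33% baseline corresponds to random guessing on three-option items. A minimal sketch of how such a multiple-choice evaluation might be scored against an LLM follows; the test item, prompt wording, and grading rule are illustrative assumptions, not the paper's actual materials.

```python
# Sketch: scoring an LLM on multiple-choice financial literacy items.
# Assumes the openai package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical three-option items (question, correct letter). With three
# options, random guessing yields the ~33% chance baseline mentioned above.
ITEMS = [
    (
        "Suppose you had $100 in a savings account earning 2% interest per "
        "year. After five years, would you have: (A) more than $102, "
        "(B) exactly $102, (C) less than $102? Answer with a single letter.",
        "A",
    ),
]

correct = 0
for question, answer in ITEMS:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
        temperature=0,  # reduce sampling noise when grading
    )
    reply = response.choices[0].message.content.strip().upper()
    correct += reply.startswith(answer)

print(f"Accuracy: {correct / len(ITEMS):.0%} (chance baseline: 33%)")
```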
Niszczota, P., & Rybicka, I. (2023). The credibility of dietary advice formulated by ChatGPT: Robo-diets for people with food allergies. Nutrition, 112076. https://doi.org/10.1016/j.nut.2023.112076
Summary: We applied a prominent large language model (ChatGPT) in the nutritional sciences. ChatGPT was tested via 56 diets covering 14 food allergens at 4 restriction levels. We tested the safety, accuracy, and attractiveness of these ‘robo-diets’. ChatGPT produced balanced diets, but its output was unsafe for one allergen. We discussed how the quality of robo-diets can improve in the future.
Niszczota, P., & Conway, P. (2023). Judgments of research co-created by generative AI: Experimental evidence (arXiv:2305.11873). arXiv. https://doi.org/10.48550/arXiv.2305.11873
Summary: We test whether delegating parts of the research process to large language models leads people to distrust and devalue researchers and scientific output. Participants (N = 402) considered a researcher who delegates elements of the research process to a PhD student or an LLM, and rated (1) moral acceptability, (2) trust in the scientist to oversee future projects, and (3) the accuracy and quality of the output. People judged delegating to an LLM as less acceptable than delegating to a human (d = –0.78). Delegation to an LLM also decreased trust in the scientist to oversee future research projects (d = –0.80), and people thought the results would be less accurate and of lower quality (d = –0.85).
Niszczota, P., & Kaszás, D. (2020). Robo-investment aversion. PLOS ONE, 15(9), e0239277. https://doi.org/10.1371/journal.pone.0239277
Summary: In five experiments (N = 3,828), we investigate whether people prefer investment decisions to be made by human investment managers rather than by algorithms (“robos”). In all of the studies, we investigate morally controversial companies, as it is plausible that the preference for humans as investment managers is exacerbated in domains where machines are perceived as less competent, such as morality. Overall, our findings show a considerable mean effect size for robo-investment aversion (d = –0.39 [–0.45, –0.32]).
Grzegorczyk, T., Sliwinski, R., & Kaczmarek, J. (2019). Attractiveness of augmented reality to consumers. Technology Analysis & Strategic Management, 31(11), 1257–1269. https://doi.org/10.1080/09537325.2019.1603368
Summary: We identified the advantages and disadvantages of AR applications over their traditional counterparts that influence consumer adoption. Our research confirms that both hedonic and utilitarian aspects of the user experience are important for the adoption of AR.
Hryniewicz, K., & Grzegorczyk, T. (2020). How different autonomous vehicle presentation influences its acceptance: Is a communal car better than agentic one? PLOS ONE, 15(9), e0238714. https://doi.org/10.1371/journal.pone.0238714
Summary: Our research focuses on (1) the type of information concerning autonomous vehicles (AVs) that consumers seek and (2) how to communicate this technology in order to increase its acceptance. Based on two studies, we show that people want to know whether AVs are communal and agentic, but they are more prone to accept a communal AV than an agentic one.
This research was supported by grants 2021/42/E/HS4/00289 (SONATA BIS), 2018/31/D/HS4/01814 (SONATA) and 2018/02/X/HS4/01703 (MINIATURA) from the National Science Centre, Poland (Narodowe Centrum Nauki).