References

[1] Balaji, P.G. and Srinivasan, D. 2010. An introduction to multi-agent systems. Innovations in multi-agent systems and applications - 1. D. Srinivasan and L.C. Jain, eds. Springer Berlin Heidelberg. 1–27.
[2] Brown, T. et al. 2020. Language models are few-shot learners. Advances in neural information processing systems (2020), 1877–1901.
[3] Chang, Y. et al. 2023. A survey on evaluation of large language models. arXiv preprint arXiv:2307.03109. (2023).
[4]
[5]
[6] Ji, Z. et al. 2023. Survey of hallucination in natural language generation. ACM Computing Surveys. 55, 12 (2023), 1–38. DOI:https://doi.org/10.1145/3571730.
[7]
[8]
[9] Li, H. et al. 2022. A survey on retrieval-augmented text generation. arXiv preprint arXiv:2202.01110. (2022).
[10] Mialon, G. et al. 2023. Augmented language models: A survey. arXiv preprint arXiv:2302.07842. (2023).
[11]
[12] Radford, A. et al. 2018. Improving language understanding by generative pre-training. (2018).
[13] Radford, A. et al. 2019. Language models are unsupervised multitask learners. (2019).
[14] ReAct: Synergizing reasoning and acting in language models: 2022. https://react-lm.github.io/.
[15] Schuster, M. and Nakajima, K. 2012. Japanese and Korean voice search. 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (2012).
[16]
[17]
[18] Vaswani, A. et al. 2017. Attention is all you need. Advances in neural information processing systems (2017).
[19]
[20]
[21]
[22] Yao, S. et al. 2022. ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629. (2022).
[23] Zhang, Y. et al. 2023. Siren’s song in the AI ocean: A survey on hallucination in large language models. arXiv preprint arXiv:2309.01219. (2023).
[24] Zhao, W.X. et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223. (2023).