References
[1] Balaji, P.G. and Srinivasan, D. 2010. An introduction to multi-agent systems. Innovations in multi-agent systems and applications - 1. D. Srinivasan and L.C. Jain, eds. Springer Berlin Heidelberg. 1–27.
[2] Brown, T. et al. 2020. Language models are few-shot learners. Advances in neural information processing systems (2020), 1877–1901.
[3] Chang, Y. et al. 2023. A survey on evaluation of large language models. arXiv preprint arXiv:2307.03109. (2023).
[4] Chen, M. et al. 2021. Evaluating large language models trained on code.
[5] Introduction to t-SNE: 2023. https://www.datacamp.com/tutorial/introduction-t-sne.
[6] Ji, Z. et al. 2023. Survey of hallucination in natural language generation. ACM Computing Surveys. 55, 12 (2023), 1–38. DOI:https://doi.org/10.1145/3571730.
[7]
[8]
[9] Li, H. et al. 2022. A survey on retrieval-augmented text generation. arXiv preprint arXiv:2202.01110. (2022).
[10] Mialon, G. et al. 2023. Augmented language models: A survey. (2023).
[11] Ouyang, L. et al. 2022. Training language models to follow instructions with human feedback.
[12] Radford, A. et al. 2018. Improving language understanding by generative pre-training. (2018).
[13] Radford, A. et al. 2019. Language models are unsupervised multitask learners. (2019).
[14] ReAct: Synergizing reasoning and acting in language models: 2022. https://react-lm.github.io/.
[15] Schuster, M. and Nakajima, K. 2012. Japanese and Korean voice search.
[16] Sennrich, R. et al. 2016. Neural machine translation of rare words with subword units.
[17] Talebirad, Y. and Nadiri, A. 2023. Multi-agent collaboration: Harnessing the power of intelligent LLM agents.
[18] Vaswani, A. et al. 2017. Attention is all you need.
[19] Wang, C. et al. 2019. Neural machine translation with byte-level subwords.
[20]
[21] Yang, J. et al. 2023. Harnessing the power of LLMs in practice: A survey on ChatGPT and beyond.
[22] Yao, S. et al. 2022. ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629. (2022).
[23] Zhang, Y. et al. 2023. Siren’s song in the AI ocean: A survey on hallucination in large language models. arXiv preprint arXiv:2309.01219. (2023).
[24] Zhao, W.X. et al. 2023. A survey of large language models.