Design of a text generator application with OpenAI GPT-3

DOI: https://doi.org/10.33650/jeecom.v5i2.6354

Author(s)


(1)* Kaira Milani Fitria (Informatics & Business Institute Darmajaya), Indonesia
(*) Corresponding Author

Abstract


The growing demand for text content motivates the development of systems that can ease the burden of manual text creation. Currently, text generation is done manually and suffers from several shortcomings, particularly time constraints, human error, limited creativity, and repetitive phrasing by individual writers, which reduce the quality and diversity of the sentences produced. This research designs an AI-based text generator application that uses the GPT-3 language model to generate text automatically and help overcome these obstacles. Applying the application increases efficiency and productivity, stimulates the writer's ideas and creativity, automates routine writing tasks, and produces engaging, communicative sentences. Its ability to generate text quickly and accurately, and to personalize output, makes it valuable in many fields. The method used in this research is to integrate the GPT-3 API into the text generator application so that the application connects to the GPT-3 engine through a customized prompting scheme. The application's output is text tailored to the user's needs through keywords entered in the web interface. The results show that the text generator application performs well enough to be applied in various fields, especially text content generation.
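To illustrate the integration described above, the following is a minimal sketch of how a web back end might call the GPT-3 completion API with keyword-based prompting. It assumes the legacy openai Python client (v0.x) and the text-davinci-003 model; the prompt template, model choice, and sampling parameters are illustrative assumptions, since the abstract does not publish the exact configuration.

    # Minimal sketch (assumptions noted above): keyword-driven text
    # generation via the legacy OpenAI completion endpoint.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    def generate_text(keywords, max_tokens=256):
        # Build a prompt from the keywords the user enters in the
        # web interface, mirroring the keyword-based prompting the
        # paper describes.
        prompt = (
            "Write a short, engaging paragraph about the following "
            "topic keywords: " + ", ".join(keywords) + "\n\nText:"
        )
        response = openai.Completion.create(
            model="text-davinci-003",  # assumed GPT-3 model
            prompt=prompt,
            max_tokens=max_tokens,
            temperature=0.7,           # moderate creativity
        )
        return response.choices[0].text.strip()

    if __name__ == "__main__":
        print(generate_text(["renewable energy", "Indonesia"]))

In a deployment like the one described, the web interface would collect the user's keywords, pass them to a handler such as this, and render the returned text back to the user.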


Keywords

Artificial Intelligence; GPT-3; NLP; Text Generation; Web Application











Copyright (c) 2023 Kaira Milani Fitria

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Journal of Electrical Engineering and Computer (JEECOM)
Published by LP3M, Nurul Jadid University, Probolinggo, East Java, Indonesia.