Measuring the Similarity of Open Peer Review Comments to the Content of Scientific Articles Using Natural Language Processing

Article type: Research article

Authors

1 PhD Candidate, Knowledge and Information Science, International Division, Shiraz University, Shiraz, Iran

2 Associate Professor, Department of Knowledge and Information Science, Shiraz University, Shiraz, Iran

3 Assistant Professor, Department of Computer Science and Engineering and Information Technology, Shiraz University, Shiraz, Iran

DOI: 10.30484/nastinfo.2020.2480.1937

Abstract

Purpose: To identify the capacity of open peer reviews to recognize medical articles based on their similarity to related articles.
Methodology: A test collection of 2,212 F1000Research articles and their review comments was built, and 100 articles were randomly selected as seed documents. The similarity between review comments and document contents was computed using the cosine similarity of TF-IDF values at the unigram and bigram levels. The relationship between content and comment similarities was analyzed with the Spearman correlation. The accuracy of predicting article content similarity from the similarity of received comments was tested using Receiver Operating Characteristic (ROC) curves.
Findings: The ability of reviewers' comments to recognize similar articles was confirmed. A significant correlation exists between contents and comments. The ROC curves also showed that the similarity of review comments, whether at the unigram or the bigram level, can identify articles with similar content.
Conclusion: The validity of reviewers' comments is rooted in their expertise and cognitive capacity. Comments can therefore count among the effective related resources for recognizing documents in document networks. This finding paves the way for research into applying user comments in retrieval, evaluation, or text classification, where content similarity matters.


Article title [English]

Measuring Similarities between Open Peer Review Comments and Contents of Scientific Articles: a Natural Language Processing Technique Inquiry

Authors [English]

  • K. Rashidi Sharifabad 1
  • H. Sotodeh 2
  • M. Mirzabeigi 2
  • S.M. Fakhrahmad 3
1 PhD Candidate, Knowledge and Information Science, Shiraz University, Shiraz, Iran
2 Associate Professor, Knowledge and Information Science, Shiraz University, Shiraz, Iran
3 Assistant Professor, Computer Science and Engineering and Information Technology, Shiraz University, Shiraz, Iran
Abstract [English]

Purpose: The social web provides a platform for publicizing open peer review reports. In this sphere, journal readers, authors, editors, and reviewers can engage in multilateral discussions of the reviewed papers and share their comments and viewpoints on the merits and probable pitfalls of papers. Open peer review comments may hence reflect the features of their parent articles. To explore this potential, the present study investigates to what extent similar comments accurately predict similar papers.
Methodology: Applying natural language processing techniques, the study analyzes the contents of a sample of papers in medicine and the life sciences and the comments they received. A test collection is built from papers openly published on F1000Research, an open access publishing platform that follows an open peer review process, transparently providing the public with peer review reports, authors' responses, and users' comments. The test collection consists of 2,212 papers and their comments; 100 papers are randomly selected as seed documents that serve as queries. The similarities between the comments and the contents of the papers are calculated using the cosine similarity of TF-IDF values, computed for both unigrams and bigrams extracted from the contents and comments. The correlation between the content and comment similarities is analyzed with the Spearman correlation, given the non-normality of the data distributions. The accuracy of predicting the papers' content similarity from the similarity of their comments is tested using Receiver Operating Characteristic (ROC) curves.
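The core similarity computation described above can be sketched as follows with scikit-learn. This is a minimal illustration, not the study's pipeline: the three toy texts are invented placeholders, and `ngram_range=(1, 2)` covers unigrams and bigrams in one vocabulary, whereas the study scored the two levels separately.

```python
# Sketch: TF-IDF vectors over unigrams and bigrams, compared with
# cosine similarity. Toy texts stand in for paper contents/comments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

texts = [
    "open peer review improves transparency of scientific publishing",
    "peer review comments reflect the content of reviewed papers",
    "deep learning methods for medical image segmentation",
]

# One vocabulary of both unigrams and bigrams; English stop words removed.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
tfidf = vectorizer.fit_transform(texts)

# Pairwise cosine similarities between all documents (3x3 matrix).
sims = cosine_similarity(tfidf)
print(sims.round(2))
```

The first two texts share terms such as "peer review", so their cosine similarity is positive, while the unrelated third text scores zero against both.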
Findings: The Spearman analysis revealed a significant correlation between the content and comment similarities, signifying that similar papers are more likely to receive similar comments and vice versa. The ROC curves show that similar comments can identify similar papers with high accuracy at both the unigram and bigram levels.
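The evaluation step can be illustrated with SciPy and scikit-learn: a Spearman correlation between the two similarity series, and a ROC test of how well comment similarity separates "truly similar" pairs. All numbers below are synthetic placeholders, and the 0.5 threshold for labeling pairs as similar is an assumption for the sketch, not the study's criterion.

```python
# Sketch: Spearman correlation between content and comment similarities,
# plus a ROC analysis of comment similarity as a predictor. Synthetic data.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Synthetic paired similarity scores for 200 seed-candidate pairs;
# comment similarity is modeled as a noisy proxy of content similarity.
content_sim = rng.uniform(0, 1, 200)
comment_sim = content_sim + rng.normal(0, 0.15, 200)

# Rank-based correlation (robust to non-normal distributions).
rho, p_value = spearmanr(content_sim, comment_sim)

# Binarize "truly similar" pairs by thresholding content similarity,
# then ask how well comment similarity separates the two classes.
truly_similar = (content_sim > 0.5).astype(int)
auc = roc_auc_score(truly_similar, comment_sim)
fpr, tpr, thresholds = roc_curve(truly_similar, comment_sim)

print(f"Spearman rho = {rho:.2f} (p = {p_value:.2g}), AUC = {auc:.2f}")
```

An area under the ROC curve well above 0.5 indicates that comment similarity discriminates similar from dissimilar pairs better than chance, which is the sense in which the prediction is called accurate.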
Conclusion: Similar comments are effective in representing similar papers. This finding has implications for interactive information retrieval systems, where users want to read experts' comments on a given paper before viewing or downloading the paper itself. It also paves the way for new studies on applying comments in spheres such as information retrieval, evaluation, or classification, where content similarity is of importance.

Keywords [English]

  • comments
  • peer reviewing
  • natural language processing
  • similarity
  • ROC curve analysis