Atashkar, M., Alipour-Hafezi, M., & Norouzi, Y. (1392 [2013]). Identifying graduate students' familiarity with the Google Scholar database. Information Systems and Services, 9(1), 61-78. [In Persian]
Delavar, A. (1396 [2017]). Research methods in psychology and educational sciences (4th ed.). Tehran: Virayesh. [In Persian]
Rahgoshay, M. (1390 [2011]). A study of meta-search engines in answering library and information science users' questions, with a proposed model for improving the ranking of search results. Master's thesis, Payame Noor University, Mashhad. [In Persian]
Riahinia, N., Rahimi, F., Latifi, M., & Bakhshian, L. (1394 [2015]). Examining the agreement between system relevance and user-oriented relevance in the SID, ISC, and Google Scholar databases. Human Information Interaction, 2(1), 1-11. [In Persian]
Saadin, S., Abbaspour, J., & Sotudeh, H. (in press). Comparing the effectiveness of related-article recommender systems in Web of Science and Google Scholar. Academic Librarianship and Information Research. [In Persian]
Farhoudi, F., & Hariri, N. (1392 [2013]). The effect of users' personality traits on relevance judgment. Iranian Journal of Information Processing and Management, 29(2), 317-331. [In Persian]
Al-Maskari, A., Sanderson, M., & Clough, P. (2007). The relationship between IR effectiveness measures and user satisfaction. In Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, July 23-27, (pp. 773-774). Retrieved October 30, 2019, from http://www.marksanderson.org/publications/my_papers/SIGIR2007-b.pdf
Bar‐Ilan, J., Keenoy, K., Levene, M., & Yaari, E. (2009). Presentation bias is significant in determining user preference for search results—A user study. Journal of the American Society for Information Science and Technology, 60 (1), 135-149.
Bar‐Ilan, J., Keenoy, K., Yaari, E., & Levene, M. (2007). User rankings of search engine results. Journal of the American Society for Information Science and Technology, 58 (9), 1254-1266.
Bar-Ilan, J., Levene, M., & Mat-Hassan, M. (2006). Methods for evaluating dynamic changes in search engine rankings: a case study. Journal of Documentation, 62 (6), 708-729.
Beel, J., & Gipp, B. (2009). Google scholar's ranking algorithm: The impact of articles' age (an empirical study). In S. Latifi (Ed.), Proceedings of the 6th International Conference on Information Technology: New Generations, April 27-29, (pp. 160-164). IEEE. Retrieved October 30, 2019, from https://ieeexplore.ieee.org/document/5070610
Beg, M. S. (2005). A subjective measure of web search quality. Information Sciences, 169 (3-4), 365-381.
Char, D. C., & Ajiferuke, I. (2013). Comparison of the effectiveness of related functions in Web of Science and Scopus. In Proceedings of the Annual Conference of CAIS/Actes du Congrès Annuel de l'ACSI. Retrieved October 30, 2019, from http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=12BDD2E4D8D78A777A4BCDD5E8FD38B1?doi=10.1.1.181.382&rep=rep1&type=pdf
Drori, O. (2002). Algorithm for documents ranking: Idea and simulation results. In Proceedings of the 14th international conference on Software Engineering and Knowledge Engineering, July 15-19, (pp. 99-102). New York, NY: ACM.
Eto, M. (2013). Evaluations of context-based co-citation searching. Scientometrics, 94 (2), 651-673.
Hariri, N. (2011). Relevance ranking on Google: Are top ranked results really considered more relevant by the users? Online Information Review, 35 (4), 598-610.
Jansen, B. J., Spink, A., & Saracevic, T. (2000). Real life, real users, and real needs: a study and analysis of user queries on the web. Information Processing & Management, 36 (2), 207-227.
Joachims, T., Granka, L., Pan, B., Hembrooke, H., & Gay, G. (2005). Accurately interpreting clickthrough data as implicit feedback. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, August 15-19, (pp. 154-161). New York, NY: ACM.
Kekäläinen, J. (2005). Binary and graded relevance in IR evaluations—comparison of the effects on ranking of IR systems. Information Processing & Management, 41 (5), 1019-1033.
Kinley, K., Tjondronegoro, D., Partridge, H., & Edwards, S. (2014). Modeling users' web search behavior and their cognitive styles. Journal of the Association for Information Science and Technology, 65 (6), 1107-1123.
Lewandowski, D. (2008). The retrieval effectiveness of web search engines: considering results descriptions. Journal of Documentation, 64 (6), 915-937.
Lingeman, J. M., & Yu, H. (2016). Learning to Rank Scientific Documents from the Crowd. arXiv preprint arXiv:1611.01400. Retrieved October 30, 2019, from https://arxiv.org/pdf/1611.01400.pdf
Martín-Martín, A., Orduña-Malea, E., Ayllón, J. M., & López-Cózar, E. D. (2014). Does Google Scholar contain all highly cited documents (1950-2013)? Retrieved October 30, 2019, from https://arxiv.org/ftp/arxiv/papers/1410/1410.8464.pdf
Nowicki, S. (2003). Student vs. search engine: Undergraduates rank results for relevance. Portal: Libraries and the Academy, 3 (3), 503-515.
Patil, S., Alpert, S. R., Karat, J., & Wolf, C. (2005). “THAT’s what i was looking for”: Comparing user-rated relevance with search engine rankings. In IFIP Conference on Human-Computer Interaction, September 12-16, (pp. 117-129). Berlin, Heidelberg: Springer.
Sakai, T. (2007). On penalising late arrival of relevant documents in information retrieval evaluation with graded relevance. In The First International Workshop on Evaluating Information Access (EVIA), May 15, (pp. 32-43). Retrieved October 30, 2019, from http://research.nii.ac.jp/ntcir/ntcir-ws6/OnlineProceedings/EVIA_Preprint_Papers/1.pdf
Sanderson, M., Paramita, M. L., Clough, P., & Kanoulas, E. (2010). Do user preferences and evaluation measures line up? In Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, (pp. 555-562). Retrieved October 30, 2019, from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.190.9251&rep=rep1&type=pdf
Su, L. T. (2003). A comprehensive and systematic model of user evaluation of Web search engines: II. An evaluation by undergraduates. Journal of the American Society for Information Science and Technology, 54 (13), 1193-1223.
Teevan, J., Dumais, S. T., & Horvitz, E. (2005). Personalizing search via automated analysis of interests and activities. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, August 15-19, (pp. 449-456). New York, NY: ACM.
Vallez, M., & Pedraza-Jimenez, R. (2007). Natural language processing in textual information retrieval and related topics. Hypertext.net, 5. Retrieved October 30, 2019, from https://www.upf.edu/hipertextnet/en/numero-5/pln.html
Wang, Y., Wang, L., Li, Y., He, D., & Liu, T. Y. (2013). A theoretical analysis of NDCG type ranking measures. In Conference on Learning Theory, June 12-14, (pp. 25-54). Retrieved October 30, 2019, from http://proceedings.mlr.press/v30/Wang13.pdf
Yoon, S. H., Kim, S. W., Kim, J. S., & Hwang, W. S. (2011). On computing text-based similarity in scientific literature. In Proceedings of the 20th International Conference Companion on World Wide Web, March 28 - April 1, (pp. 169-170). New York, NY: ACM.