• Research article
  • Open access
  • Published: 16 March 2017

Potential predatory and legitimate biomedical journals: can you tell the difference? A cross-sectional comparison

  • Larissa Shamseer (ORCID: orcid.org/0000-0003-3690-3378)1,2,
  • David Moher1,2,
  • Onyi Maduekwe3,
  • Lucy Turner4,
  • Virginia Barbour5,
  • Rebecca Burch6,
  • Jocalyn Clark7,
  • James Galipeau1,
  • Jason Roberts8 &
  • Beverley J. Shea9

BMC Medicine volume 15, Article number: 28 (2017)


Background

The Internet has transformed scholarly publishing, most notably, by the introduction of open access publishing. Recently, there has been a rise of online journals characterized as ‘predatory’, which actively solicit manuscripts and charge publications fees without providing robust peer review and editorial services. We carried out a cross-sectional comparison of characteristics of potential predatory, legitimate open access, and legitimate subscription-based biomedical journals.

Methods

On July 10, 2014, scholarly journals from each of the following groups were identified – potential predatory journals (source: Beall’s List), presumed legitimate, fully open access journals (source: PubMed Central), and presumed legitimate subscription-based (including hybrid) journals (source: Abridged Index Medicus). MEDLINE journal inclusion criteria were used to screen and identify biomedical journals from within the potential predatory journals group. One hundred journals from each group were randomly selected. Journal characteristics (e.g., website integrity, look and feel, editors and staff, editorial/peer review process, instructions to authors, publication model, copyright and licensing, journal location, and contact) were collected by one assessor and verified by a second. Summary statistics were calculated.

Results

Ninety-three predatory journals, 99 open access, and 100 subscription-based journals were analyzed; exclusions were due to website unavailability. Many more predatory journals’ homepages contained spelling errors (61/93, 66%) and distorted or potentially unauthorized images (59/93, 63%) compared to open access journals (6/99, 6% and 5/99, 5%, respectively) and subscription-based journals (3/100, 3% and 1/100, 1%, respectively). Thirty-one (33%) predatory journals promoted a bogus impact metric – the Index Copernicus Value – versus three (3%) open access journals and no subscription-based journals. Nearly three quarters ( n  = 66, 73%) of predatory journals had editors or editorial board members whose affiliation with the journal was unverified versus two (2%) open access journals and one (1%) subscription-based journal in which this was the case. Predatory journals charge a considerably smaller publication fee (median $100 USD, IQR $63–$150) than open access journals ($1865 USD, IQR $800–$2205) and subscription-based hybrid journals ($3000 USD, IQR $2500–$3000).

Conclusions

We identified 13 evidence-based characteristics by which predatory journals may potentially be distinguished from presumed legitimate journals. These may be useful for authors who are assessing journals for possible submission or for others, such as universities evaluating candidates’ publications as part of the hiring process.


Background

The Internet has transformed scholarly publishing. It has allowed for the digitalization of content and subsequent online experimentation by publishers, enabling print journals to host content online, and set the course for online open-access publishing. Nevertheless, an unwelcome consequence of the Internet age of publishing has been the rise of so-called predatory publishing.

In the traditional subscription model of publishing, journals typically require transfer of copyright from authors for articles they publish, and their primary revenue stream is fees charged to readers to access journal content, typically subscription fees or pay-per-article charges. Open access publishing, in contrast, typically allows authors to retain copyright, and is combined with a license (often from Creative Commons) that enables free and immediate access to published content coupled with rights of reuse [ 1 ]. Some open access journals [ 2 ] and many hybrid journals (i.e., those with some open access content alongside non-open access content) [ 3 ] use a business model that relies on publication charges (often called article publication or processing charges, or APCs) to the author or funder of the research to permit immediate and free access.

Predatory publishing is a relatively recent phenomenon that seems to be exploiting some key features of the open access publishing model. It is sustained by collecting APCs that are far less than those found in presumably legitimate open access journals and which are not always apparent to authors prior to article submission. Jeffrey Beall, a librarian at the University of Colorado in Denver, first sounded the alarm about ‘predatory journals’ and coined the term. He initiated and maintained a listing of journals and publishers that he deemed to be potentially, possibly, or probably predatory, called Beall’s List [ 4 ] (content unavailable at the time of publishing). Their status was determined by a single person (Jeffrey Beall), against a set of evolving criteria (in its 3rd edition at the time of writing) that Beall based largely on the Committee on Publication Ethics (COPE) Code of Conduct for Journal Editors and the membership criteria of the Open Access Scholarly Publishers Association [ 5 – 7 ]. Others have suggested similar criteria for defining predatory journals [ 8 , 9 ].

The phenomenon of predatory publishing is growing and opinions on its effects are divided. Critics say that it is extremely damaging to the scientific record and must be stopped [ 10 , 11 ]. Others feel that, while problematic, predatory publishing is a transient state in publishing and will disappear or become obvious over time [ 12 ]. A fundamental problem of predatory journals seems to be that they collect an APC from authors without offering concomitant scholarly peer review (although many claim to [ 13 ]) that is typical of legitimate journals [ 14 ]. Additionally, they do not appear to provide typical publishing services such as quality control, licensing, indexing, and perpetual content preservation and may not even be fully open access. They tend to solicit manuscripts from authors through repeated email invitations (i.e., spam) boasting open access, rapid peer review, and praising potential authors as experts or opinion leaders [ 13 ]. These invitations may seem attractive or an easy solution to inexperienced or early career researchers who need to publish in order to advance their career, or to those desperate to get a publication accepted after a number of rejections, or to those simply not paying attention. Predatory journals may also be a particular problem in emerging markets of scientific research, where researchers face the same pressure to publish, but lack the skills and awareness to discern legitimate journals from predatory ones.

Still, many researchers and potential authors are not aware of the problem of predatory journals and may not be able to detect a predatory journal or distinguish one from a legitimate journal. In order to assist readers, potential authors, and others in discerning legitimate journals from predatory journals, it would be useful to compare characteristics from both predatory and non-predatory journals to see how they differ.

In this study, we undertook a cross-sectional study comparing the characteristics of three types of biomedical journals, namely (1) potential predatory journals, (2) presumed legitimate, fully open access journals, and (3) presumed legitimate subscription-based biomedical journals that may have open access content (e.g., hybrid).

Methods

This was a cross-sectional study.

Journal identification and selection

We searched for journals on July 10, 2014. For feasibility, only journals with English-language websites were considered for inclusion and we set out to randomly select 100 journals within each comparison group. The following selection procedures were used to identify journals within each comparison group:

Potential predatory journals (‘Predatory’): We considered all journals named on Beall’s List of single publishers for potential inclusion. We applied the MEDLINE Journal Selection criteria [ 15 ]: “[Journals] predominantly devoted to reporting original investigations in the biomedical and health sciences, including research in the basic sciences; clinical trials of therapeutic agents; effectiveness of diagnostic or therapeutic techniques; or studies relating to the behavioural, epidemiological, or educational aspects of medicine.” Three independent assessors (OM, DM, LS) carried out screening in duplicate. From the identified biomedical journals, a computer-generated random sample of 100 journals was selected for inclusion. Journals that were excluded during data extraction were not replaced.

Presumed legitimate fully open-access journals (‘Open Access’): A computer-generated, random sample of 95 journals from those listed on PubMed Central as being full, immediate open access, were included. In addition, five well-established open access journals were purposefully included: PLOS Medicine , PLOS One , PLOS Biology , BMC Medicine , and BMC Biology .

Presumed legitimate subscription-based journals (‘Subscription-based’): A computer-generated, random sample of 100 journals from those listed in the Abridged Index Medicus (AIM) was included. AIM was initiated in 1970 and contains a selection of articles from 100 (now 119) English-language journals, as a source of relevant literature for practicing clinicians [ 16 ]. AIM was used here since all journals in this group were initiated prior to the digital era and presumed to have maintained a partially or fully subscription-based publishing model [confirmed by us].

For all journals, their names and URLs were automatically obtained during the journal selection process and collected in Microsoft Excel. Screening and data extraction were carried out in the online study management software, Distiller SR (Evidence Partners, Ottawa, Canada). Journals with non-functioning websites at the time of data extraction or verification were excluded and not replaced.
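The computer-generated random selection described above can be sketched as follows. This is an illustrative reconstruction, not the study's actual code: the seed value and function name are our assumptions, and the study does not report how its random numbers were generated.

```python
import random

def select_random_sample(journals, n=100, seed=2014):
    # Sketch of the computer-generated random sampling of journals.
    # The seed is an assumption added here for reproducibility only;
    # the study does not report one.
    rng = random.Random(seed)
    if len(journals) <= n:
        return list(journals)
    return rng.sample(journals, n)  # sampling without replacement

# Illustrative: 156 screened biomedical journals reduced to a sample of 100
screened = [f"Journal-{i:03d}" for i in range(1, 157)]
sample = select_random_sample(screened)
print(len(sample))  # 100
```

Because `random.sample` draws without replacement, no journal can appear twice in a group's sample, consistent with the study's report that no journal appeared in more than one study group.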

Data extraction process

Data were extracted by a single assessor (OM) between October 2014 and February 2015. An independent audit (done by LS) of a random 10% of the sample showed discrepancies in 34/56 items (61%) on at least one occasion. As such, we proceeded to verify the entire sample by a second assessor. Verification was carried out in April 2015 by one of eight assessors (RB, JC, JG, DM, JR, LS, BJS, LT) with experience and expertise in various aspects of the biomedical publishing process. Any disagreements that arose during the verification process were resolved by third party arbitration (by LS or LT). It was not possible to fully blind assessors to study groups due to their involvement in the journal selection process (OM, DM, LS).

Data extraction items

Items for which data were extracted were based on a combination of items from Beall’s criteria (version 2, December 2012) for determining predatory open-access publishers [ 6 ], the COPE Code of Conduct for Journal Publishers ( http://publicationethics.org/resources/code-conduct ), and the OASPA Membership criteria ( http://oaspa.org/membership/membership-criteria/ ). Data for 56 items were extracted in the following nine categories: aims and scope, journal name and publisher, homepage integrity (look and feel), indexing and impact factor, editors and staff, editorial process and peer review, publication ethics and policies, publication model and copyright, and journal location and contact.

Data analysis

Data were descriptively summarized within each arm. Continuous data were summarized by medians and interquartile range (IQR); dichotomous data were summarized using proportions.
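The descriptive summaries described above can be sketched as below. The function names are ours, and the quartile-interpolation method is an assumption: the paper does not state how its quartiles were computed.

```python
from statistics import median, quantiles

def summarize_continuous(values):
    # Median and interquartile range (IQR), as reported in the paper.
    # method="inclusive" is an assumed interpolation choice.
    q1, _, q3 = quantiles(values, n=4, method="inclusive")
    return median(values), (q1, q3)

def summarize_dichotomous(count, total):
    # Proportion for yes/no journal characteristics.
    return count / total

# Illustrative values only, not the study data
med, iqr = summarize_continuous([63, 100, 100, 150, 150])
prop = summarize_dichotomous(61, 93)  # e.g., 61/93 homepages with errors
```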

Results

Ninety-three potential predatory journals, 99 open access journals, and 100 subscription-based journals were included in the analysis. The process of journal identification, inclusion, and exclusions within each study group is outlined in Fig.  1 ; 397 journals were identified as potential predatory journals. After de-duplication and screening for journals publishing biomedical content, 156 journals were identified, from which a random sample of 100 were chosen. Seven journals from the predatory group and one from the legitimate open access group were excluded during data extraction due to non-functional websites. No journal appeared in more than one study group.

Flow diagram of journal identification, selection, and inclusion in each study group. a Potential predatory journals identified from Beall’s list. b Presumed legitimate fully open access journals identified from PubMed Central including five purposely selected journals: PLOS Medicine , PLOS One , PLOS Biology , BMC Medicine , and BMC Biology . c Subscription-based journals identified from AIM

There were four unanticipated journal exclusions during data extraction in the presumed legitimate open access and subscription-based groups, for which randomly selected replacement journals were used. One journal was listed twice in the open access group and another was deemed to be a magazine rather than a scientific journal. Two journals in the subscription-based journal group were deemed to be a magazine and a newsletter, respectively. The decision to exclude and replace these was made post-hoc, by agreement between LS and DM.

Our main findings of journal characteristics for each data extraction category are summarized in Tables  1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 and 9 .

Homepage and general characteristics

About half of the predatory journals in our sample indicated interest in publishing non-biomedical topics (e.g., agriculture, geography, astronomy, nuclear physics) alongside biomedical topics in the stated scope of the journal and seemed to publish on a larger number of topics than non-predatory journals (Table  1 ). Predatory journals included pharmacology and toxicology ( n  = 59) in the scope of their journal four and a half times more often than open access journals ( n  = 13) and almost 30 times more than subscription-based journals ( n  = 2).

When we examined the similarity of the journal name to other existing journals (e.g., one or two words different on the first page of Google search results), we found that over half of predatory journals ( n  = 51, 54.84%) had names that were similar to an existing journal compared to only 17 open access journals (17.17%) and 22 subscription-based journals (22.00%) (Table  2 ). In all study groups, the journal name was well reflected in the website URL. For journals that named a country in the journal title, some journals named a different country in the journal contact information (11/21 (52.38%) predatory; 4/13 (30.77%) open access; 1/31 (3.23%) subscription-based) (Table  3 ). There was a high prevalence of predatory journals from low- or low- to middle-income countries (LMICs) (48/64, 75.00%) compared to open access journals (18/92, 19.56%); none of the subscription-based journals listed LMIC addresses.
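The "one or two words different" name check above can be roughly mechanized as a word-set comparison. This is a simplification of the study's manual, Google-search-based assessment; the function name and threshold are our assumptions.

```python
def names_similar(name_a, name_b, max_word_diff=2):
    # Rough proxy for the manual check: flag journal titles whose word
    # sets differ by at most one or two words. The study's actual
    # assessment compared Google search results, not raw strings.
    words_a = set(name_a.lower().split())
    words_b = set(name_b.lower().split())
    return len(words_a ^ words_b) <= max_word_diff

print(names_similar("Journal of Clinical Medicine",
                    "Journal of Clinical Medicine Research"))  # True
print(names_similar("BMC Medicine", "The Lancet"))             # False
```

The journal names in the example are used only to illustrate near-identical titles; they are not cases reported by the study.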

We assessed the integrity of the homepage by examining the content for errors (Table  4 ). Spelling and grammatical errors were more prevalent in predatory journals ( n  = 61, 65.59%) than in open access ( n  = 6, 6.06%) and subscription-based journals ( n  = 3, 3.00%). In addition, we found a higher frequency of distorted or potentially unauthorized image use (e.g., company logos such as Google, MEDLINE, COPE, Crossref) in predatory journals ( n  = 59, 63.44%) than in open access ( n  = 5, 5.05%) and subscription-based journals ( n  = 1, 1%). Readers were the main target of language used on subscription-based journal webpages ( n  = 58, 58%) but less so in open access ( n  = 14, 14.14%) and predatory ( n  = 3, 3.23%) journals, where authors (predatory journals) or both authors and readers (open access journals) were the primary target.

Metrics and indexing

Most subscription-based journals indicated having a journal impact factor (assumed 2-year Thomson Reuters JIF unless otherwise indicated) ( n  = 80, median 4.275 (IQR 2.469–6.239)) compared to less than half of open access journals ( n  = 38, 1.750 (1.330–2.853)) and fewer predatory journals ( n  = 21, 2.958 (0.500–3.742)) (Table  5 ). More than half of predatory journals ( n  = 54, 58.06%) and subscription-based journals ( n  = 62, 62%) mentioned another journal-level metric, compared to only 16 (16.16%) open access journals. A metric called the Index Copernicus Value was the most common other metric mentioned in 31 predatory journals (33.33%) and in three open access journals (3.03%), followed by the 5-year impact factor (Thomson Reuters) mentioned in two open access journals (2.02%) and 27 subscription-based journals (27.00%), followed by the Scientific Journal Rankings (i.e., SCImago Journal Rank by Scopus) mentioned in seven predatory, six open access, and eight subscription-based journals. The top databases in which journals indicated being indexed were Google Scholar for predatory journals ( n  = 47, 50.54%), PubMed for open access journals ( n  = 85, 85.86%), and MEDLINE for subscription-based journals ( n  = 39, 39%). About half of predatory journals ( n  = 48, 51.61%) and 65 (65.65%) open access journals mention DOAJ (indexed in or applied for indexing). International Committee of Medical Journal Editors (ICMJE) was mentioned in some capacity in 16 predatory journals and about three quarters of non-predatory journals.

Editors and editorial process

Nearly a quarter ( n  = 22, 23.66%) of predatory journals, 17 (17.17%) open access journals, and 9 (9%) subscription-based journals did not name an editor-in-chief (EIC) (Table  6 ). Of those that did, 40 (56.33%) predatory, 71 (86.59%) open access, and 57 (62.64%) subscription-based journals provided an institutional affiliation for the named EIC. An editorial board listing individual members was provided in 60 (64.52%) predatory journals, 92 (92.93%) open access journals, and 72 (72%) subscription-based journals, each comprising a median of 23 (IQR 14–37), 32.5 (22–50), and 27.5 (16.5–62) board members, respectively. If editors, journal staff, or editorial board members were identified, we completed a subjective assessment of the validity of three arbitrary names and the likelihood of their association with the journal by performing a Google search of their name (in quotations) and searching any online profiles for affiliation with the journal. Details of this assessment can be found in Table  6 . For journals with names of editors, staff, or board members available, 100% of names checked in subscription-based journals were found to be legitimate as well as in 95/98 (96.94%) open access journals. Only 24/90 (26.67%) named editors, staff, or board members were assessed as having a legitimate association with the journal among predatory journals. Almost 100% of non-predatory journals appear to use a manuscript submission system, whereas just over half of predatory journals use such a system; almost 70% of predatory journals request that authors send their manuscripts by email and 63% of those journals provide what appears to be a non-professional (e.g., Gmail, Yahoo) email address to do so. Almost all journals (95% predatory journals, 100% open access journals, 92% of subscription-based journals) indicate using peer review during publication consideration (Table  7 ).

Publication ethics and policies

We examined journals’ promotion and practices around publication ethics (Table  8 ). About three quarters ( n  = 77, 77.78%) of open access journals and about a third ( n  = 33, 33.00%) of subscription-based journals mentioned COPE somewhere on their website, whereas only 13 predatory journals (13.98%) did. Few predatory journals had policies about retractions ( n  = 12, 12.90%), corrections/errata ( n  = 22, 23.66%), or plagiarism ( n  = 44, 47.31%), whereas more than half of all non-predatory journals had available policies for all three (retractions: n  = 112, 56.28%; corrections/errata: n  = 100, 50.25%; plagiarism: n  = 119, 59.80%). Sixty-two subscription-based (62%), 56 open access (56.57%), and only 6 predatory (6.45%) journals suggested, recommended, or required study registration. No predatory journals mentioned the Enhancing the Quality and Transparency of health Research (EQUATOR) Network, whereas about a quarter (49/195) of presumed legitimate journals did so.

Publication model, fees, and copyright

We assessed whether journals made any indication about accessibility, fees, and copyright (Table  9 ). Forty-two (42.00%) subscription-based journals indicated being partially open access in some capacity (e.g., hybrid or delayed access), with the remainder not mentioning open access. Almost all ( n  = 95, 95.00%) subscription-based journals indicated that there was a subscription charge. Eighty-three potential predatory (89.25%) and 94 open access (94.95%) journals claimed to be open access (presumed to be full, immediate open access as no qualification regarding partial or delayed access was stated). For the five (5.05%) open access journals that did not specifically indicate being open access, all had content that was free to access (we did not investigate this further). Subscription-based journals and open access journals seemed to collect revenue from a range of sources (Table  9 ), while predatory journals appeared to mainly collect revenue from APCs ( n  = 73, 78.49%) and, to a lesser extent, subscription fees ( n  = 13, 13.98%); in 14 predatory journals (15.05%), no sources of revenue (including an APC) could be found. Of journals listing an APC, the median fee (USD) was $100 ($63–$150) in predatory journals ( n  = 59), $1865 ($800–$2205) in open access journals ( n  = 70), and $3000 ($2500–$3000) in subscription-based hybrid journals ( n  = 44). Almost 90% of all journals indicated which party retained copyright of published work. Explicit statements that authors retained copyright were present in 68.09% ( n  = 64) of open access journals, 36.78% ( n  = 32) of subscription-based journals, and only 12% ( n  = 9) of predatory journals.

Discussion

This study demonstrates that our sample of potential predatory journals is distinct in some key areas from presumed legitimate journals and provides evidence of how they differ. While criteria have been proposed previously to characterize potential predatory journals [ 7 ], measuring each journal against a long list of criteria is not practical for the average researcher. It can be time consuming and some criteria are not straightforward to apply, as we have learned during this study. For instance, whether or not the listed editors of a journal are real people or have real affiliations with a journal is quite subjective to assess. Another example pertains to preservation and permanent access to electronic journal content. We found that not all presumed legitimate journals made explicit statements about this; however, we know that in order to be indexed in MEDLINE, a journal must “Have an acceptable arrangement for permanent preservation of, and access to, the content” [ 17 ].

From our findings, we have developed a list of evidence-based, salient features of suspected predatory journals (Table  10 ) that are straightforward to assess; we describe them further below. We recognize that these criteria are likely not sensitive enough to detect all potentially illegitimate, predatory journals. However, we feel they are a good starting point.

Non-biomedical scope of interest

We found that predatory journals tend to indicate interest in publishing research that was both biomedical and non-biomedical (e.g., agriculture, geography, astrophysics) within their remit, presumably to avoid limiting submissions and increase potential revenues. While legitimate journals may do this periodically (we did not assess the scope of presumed legitimate biomedical journals), the topics usually have some relationship between them and represent a subgroup of a larger medical specialty (e.g., Law and Medicine). Authors should examine the scope and content (e.g., actual research) of the journals they intend to publish in to determine whether it is in line with what they plan to publish.

Spelling and grammar

The home page of a journal’s website may be a good initial indicator of their legitimacy. We found several homepage indicators that may be helpful in assessing a journal’s legitimacy and quality. The homepages of potential predatory journals’ websites contained at least 10 times more spelling and grammar errors than presumed legitimate journals. Such errors may be an artefact of foreign language translation into English, as the majority of predatory journals were based in countries where a non-English language is dominant. Further, legitimate publishers and journals may be more careful about such errors to maintain professionalism and a good reputation.

Fuzzy, distorted, or potentially unauthorized images

Potential predatory journals appeared to have images that were low-resolution (e.g., fuzzy around the edges) or distorted ‘knock-off’ versions of legitimate logos or images.

Language directed at authors

Another homepage check authors can do is to examine the actual written text to gauge the intended audience. We found that presumed legitimate journals appear to target readers with their language and content (e.g., highlighting new content), whereas potential predatory journals seem to target prospective authors by inviting submissions, promising rapid publication, and promoting different metrics (including the Index Copernicus Value).

Manuscript submission and editorial process/policies

Authors should be able to find information about what happens to their article after it is submitted. Potential predatory journals do not seem to provide much information about their operations compared to presumed legitimate journals. Furthermore, most potential predatory journals request that articles be submitted via email rather than through a submission system (e.g., Editorial Manager, ScholarOne), as presumed legitimate journals do. Typically, journals have requirements that must be met or checked by authors or the journal during submission (e.g., declaration of conflicts of interest, agreement that the manuscript adheres to authorship standards and other journal policies, plagiarism detection). When a manuscript is submitted via email, these checks are not automatic and may never occur. Authors should be cautious of publishing in journals that only take submissions via email and that do not appear to check manuscripts against journal policies, as such journals are likely of low quality. In addition, the email address provided by a journal seems to be a good indicator of its legitimacy. Predatory journals seem to provide non-professional or non-academic email addresses, such as from providers with non-secured servers like Gmail or Yahoo.

Very low APC and inappropriate copyright

Finally, authors should be cautious when the listed APC of a biomedical journal is under $150 USD. This is very low in comparison to presumed legitimate, fully open access biomedical journals, for which the median APC is at least 18 times more. Hybrid subscription journals charge 30 times the amount of potential predatory journals to publish and make research openly accessible. It has been suggested that hybrid journals charge a higher fee in order to maintain their ‘prestige’ (e.g., journals can be more selective about their content based on who is willing to pay the high fee) [ 18 ]. In contrast, extremely low APCs may simply be a way for potential predatory journals to attract as many submissions as possible in order to generate revenue and, presumably, to build their content and reputation. Evidently, the APC varies widely across journals, perhaps more than any other characteristic we measured. Journal APCs are constantly evolving, and increasing requirements by funders to make research open access may have a drastic impact on APCs as we know them over the coming years.

Researchers should be trained on author responsibilities, including how to make decisions about where to publish their research. Ideally, authors should start with a validated or ‘white’ list of acceptable journals. In addition to considering the items listed in Table  10 in their decision-making, tools to guide authors through the journal selection process have started to emerge, such as ThinkCheckSubmit ( http://thinkchecksubmit.org/ ). Recently, COPE, OASPA, DOAJ, and WAME produced principles of transparency against which DOAJ, among other measures, assesses journals before they can be listed in the database ( https://doaj.org/bestpractice ). We also encourage researchers to examine all journals for quality and legitimacy using the characteristics in Table  10 when making a decision on where to submit their research. As the journal landscape changes, it is no longer sufficient for authors to make assumptions about the quality of journals based on arbitrary measures, such as perceived reputation, impact factor, or other metrics, particularly in an era where bogus metrics abound and legitimate ones are being imitated.

This study examined most of Beall’s criteria for identification of predatory publishers and journals, together with items from COPE and OASPA. While many of the characteristics we examined were useful to distinguish predatory journals from presumed legitimate journals, there were many that do not apply or that are not unique to predatory journals. For instance, defining criteria of predatory journals [ 4 ] suggest that no single individual is named as an editor and that such journals do not list an editorial board. We found that this was not the case in over two thirds of predatory journals and, in fact, a named EIC could not be identified for 26 (13.07%) of the presumed legitimate journals in our sample. Such non-evidence-based criteria for defining journals may introduce confusion rather than clarity and distinction.

The existing designation of journals and publishers as predatory may be confusing for other reasons. For instance, more than one presumed-legitimate publisher has appeared on Beall’s list [ 19 ]. In October 2015, Frontiers Media, a well-known Lausanne-based open access publisher, appeared on Beall’s List [ 20 ]. Small, new, or under-resourced journals may appear to have the look and feel of a potential predatory journal because they do not have affiliations with large publishers or technologies (e.g., manuscript submission systems) or mature systems and the features of a legitimate journal. This is in line with our findings that journals from low-resourced (LMIC) countries were more often in the potentially predatory group of journals than either of the presumed-legitimate journal arms. However, this does not imply that they are necessarily predatory journals.

Another limitation is that the majority of the open access biomedical journals in our sample (95%) charged an APC, while generally many open access journals do not. May 2015 was the last time that the DOAJ provided complete information regarding APCs of journals that it indexes (fully open access, excluding delayed or partial open access). At that time, approximately 32% of journals charged an APC. At the time of writing this article, approximately 40% of medical journals in DOAJ appear to charge an APC. However, these figures do not account for the hybrid-subscription journals that have made accommodations in response to open access, many of which are included in our sample of subscription-based journals. For such journals, our data and that of others [ 21 ] show that their fees appear to be substantially higher than either potential predatory or fully open access journals.

In the context of other research

To the best of our knowledge, this is the first comparative study of predatory and legitimate journal publishing models aimed at determining how they are similar and different. Previously, Shen and Björk [ 22 ] examined a sample of about 5% of journals listed on Beall’s List for a number of characteristics, including three that overlap with items for which we collected data: APC, country of publisher, and rapidity of (submission to) publication. For the characteristics examined, our findings within the predatory journal group are largely similar. For example, Shen and Björk [ 22 ] found the average APC for single-publisher journals to be $98 USD, very close to our result ($100 USD). They also found that 42% of single predatory journal publishers were located in India, whereas our estimate was closer to 62%. Differences between the two studies may exist because we focused on biomedical journals while they included all subject areas.

Limitations

It was not possible to fully blind assessors to study groups since, given the expertise of team members, some familiarity with non-predatory publishers was expected. In addition, we could only include items that could be assessed superficially rather than those requiring in-depth investigation of each journal. Many items can and should be investigated further.

Because some characteristics are likely deliberately similar across journal groups (e.g., journals from all groups claim to be open access and indicate carrying out peer review) [ 14 ], and it was difficult to anticipate which, we did not carry out a logistic regression to determine which characteristics were associated with predatory versus presumed legitimate journals.

Conclusions

This research begins to build the evidence base illuminating the differences between major publishing models and, moreover, the characteristics unique to potential predatory (or illegitimate) journals (Table 10).

The possibility that some journals are predatory is problematic for many stakeholders involved in research publication. Most researchers are not formally trained in publication skills and ethics and, as such, may not be able to discern whether a journal is running legitimate operations. For early career researchers, or for those unaware of the existence or characteristics of predatory journals, predatory journals can be difficult to distinguish from legitimate ones. However, this study indicates that predatory journals offer APCs at least 18-fold lower than those of non-predatory journals, which may be attractive to uninformed authors and to those with limited fiscal resources. Assuming that each journal publishes 100 articles annually, the revenue across all predatory journals would amount to an enterprise of at least $100 million USD. This is a substantial amount of money forfeited by authors, and potentially by funders and institutions, for publications that have not received legitimate professional editorial and publishing services, including indexing in databases.
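The revenue figure above is a back-of-envelope calculation that can be sketched as follows. The journal count is an assumption introduced here purely for illustration (roughly 10,000 journals is what the stated total implies); the 100 articles per journal and the $100 median APC come from the text.

```python
# Back-of-envelope estimate of annual predatory-publishing revenue.
# n_journals is a hypothetical count chosen for illustration only; the
# articles-per-journal and median APC figures come from the article.
n_journals = 10_000            # assumption, not stated in the article
articles_per_journal = 100     # assumed annual output per journal
median_apc_usd = 100           # median APC observed for predatory journals

annual_revenue = n_journals * articles_per_journal * median_apc_usd
print(f"Estimated annual revenue: ${annual_revenue:,} USD")  # $100,000,000
```

Even halving the journal count or the per-journal output leaves the estimate in the tens of millions of dollars, which is why the article can claim "at least" $100 million without a precise census of predatory journals.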

Established researchers should beware of predatory journals as well. There are numerous anecdotes about researchers (even deceased researchers [ 23 ]) who were put on a journal’s editorial board or named as an editor without wishing to be, and who were unable to get their names delisted [ 24 ]. Aside from potentially compromising the reputation of the individual who finds him- or herself on the board, such an affiliation with a potential predatory journal may confer undeserved legitimacy on the journal and has the potential to confuse a naïve reader or author. As our findings indicate, this phenomenon appears to be a clear feature of predatory journals.

In addition to the costs and potential fiscal waste of publication in predatory journals, these journals do not appear to be indexed in appropriate databases that would enable future researchers and other readers to consistently identify and access the research published within them. The majority of predatory journals indicated being ‘indexed’ in Google Scholar, which is not an indexing database: rather than searching pre-selected journals (as databases such as MEDLINE, Web of Science, and Scopus do), Google Scholar searches the Internet for scholarly content. Some potentially predatory journals indicate being indexed in well-known biomedical databases; however, we did not verify the truthfulness of these claims by checking the databases. Nonetheless, if legitimate clinical research is published in predatory journals and cannot be discovered, this is wasteful [ 25 ], particularly when it may impact systematic reviews. Equally, if non-peer-reviewed, low-quality research in predatory journals is discovered and included in a systematic review, it may pollute the scientific record. In biomedicine, this may have detrimental consequences for patient care.

Future research

What is contained (i.e., ‘published’) within potential predatory journals is still unclear. To date, there has been no large-scale evaluation of the content of predatory journals to determine whether research is actually being published, what types of studies predominate, and whether any reported data are legitimate. In addition, we have little understanding of who is publishing in predatory journals (i.e., author experience, geographic location, etc.) and why. Presumably, the low APC is an attractive feature; however, knowing whether authors publish in these journals intentionally or unintentionally is critical to understanding the publishing landscape and to anticipating its future directions and considerations.

The findings presented here can facilitate education on how to differentiate between presumed legitimate journals and potential predatory journals.

Abbreviations

AIM: Abridged Index Medicus
APC: article processing charge
CONSORT: CONsolidated Standards Of Reporting Trials
COPE: Committee On Publication Ethics
DOAJ: Directory Of Open Access Journals
EIC: editor-in-chief
EQUATOR: Enhancing the QUAlity And Transparency Of health Research
ISSN: international standard serial number
JIF: journal impact factor
LMIC: low- or middle-income country
OASPA: Open Access Scholarly Publishers Association
PLOS: Public Library Of Science
PRISMA: Preferred Reporting Items for Systematic reviews and Meta-Analyses
STARD: STAndards for Reporting Diagnostic accuracy
STROBE: STrengthening the Reporting of OBservational studies in Epidemiology
USD: United States Dollar

References

1. Bethesda Statement on Open Access Publishing. http://legacy.earlham.edu/~peters/fos/bethesda.htm . Accessed 31 Mar 2016.
2. Morrison H, Salhab J, Calve-Genest A, Horava T. Open access article processing charges: DOAJ survey May 2014. Publications. 2015;3:1–16.
3. Crotty D. Is it True that Most Open Access Journals Do Not Charge an APC? Sort of. It Depends. The Scholarly Kitchen. 2015. http://scholarlykitchen.sspnet.org/2015/08/26/do-most-oa-journals-not-charge-an-apc-sort-of-it-depends/ . Accessed 4 Apr 2016.
4. Beall J. Beall’s List: Potential, Possible, or Probable Predatory Scholarly Open-Access Publishers. 2015. http://scholarlyoa.com/publishers/ . Accessed 7 Jan 2016.
5. Beall J. Criteria for Determining Predatory Open-Access Publishers. 1st ed. 2012. http://wp.me/p280Ch-g5 . Accessed 1 Apr 2014.
6. Beall J. Criteria for Determining Predatory Open-Access Publishers. 2nd ed. 2012. http://scholarlyoa.com/2012/11/30/criteria-for-determining-predatory-open-access-publishers-2nd-edition/ . Accessed 5 Jul 2016.
7. Beall J. Criteria for Determining Predatory Open-Access Publishers. 3rd ed. 2015. https://scholarlyoa.files.wordpress.com/2015/01/criteria-2015.pdf . Accessed 1 Jul 2016.
8. Dadkhah M, Bianciardi G. Ranking predatory journals: solve the problem instead of removing it! Adv Pharm Bull. 2016;6(1):1–4.
9. Glick M, Springer MD, Bohannon J, Shen C, Björk B-C. Publish and perish. J Am Dent Assoc. 2016;147(6):385–7.
10. Clark J, Smith R. Firm action needed on predatory journals. BMJ. 2015;350:h210.
11. Caplan AL, Bohannon J, Beall J, Sarwar U, Nicolaou M, Balon R, et al. The problem of publication-pollution denialism. Mayo Clin Proc. 2015;90(5):565–6.
12. Eisen M. Door-to-Door Subscription Scams: The Dark Side of The New York Times. It is NOT Junk. 2013. http://www.michaeleisen.org/blog/?p=1354 . Accessed 22 Jun 2016.
13. Moher D, Srivastava A. You are invited to submit…. BMC Med. 2015;13:180.
14. Bohannon J. Who’s afraid of peer review? Science. 2013;342:60–5.
15. MEDLINE® Journal Selection Fact Sheet. U.S. National Library of Medicine. https://www.nlm.nih.gov/pubs/factsheets/jsel.html . Accessed Apr 2014.
16. Abridged Index Medicus. N Engl J Med. 1970;282(4):220–1.
17. U.S. National Library of Medicine. MEDLINE Policy on Indexing Electronic Journals. https://www.nlm.nih.gov/bsd/policy/ejournals.html . Accessed 26 Jul 2016.
18. Van Noorden R. Open access: the true cost of science publishing. Nature. 2013;495(7442):426–9.
19. Butler D. Investigating journals: the dark side of publishing. Nature. 2013;495(7442):433–5.
20. Bloudoff-Indelicato M. Backlash after Frontiers journals added to list of questionable publishers. Nature. 2015;526(7575):613.
21. Wellcome Trust. Wellcome Trust and COAF Open Access Spend, 2014–15. Wellcome Trust blog. 2015. https://blog.wellcome.ac.uk/2016/03/23/wellcome-trust-and-coaf-open-access-spend-2014-15/ . Accessed 21 Jun 2016.
22. Shen C, Björk B-C. ‘Predatory’ open access: a longitudinal study of article volumes and market characteristics. BMC Med. 2015;13:230.
23. Spears T. The Editor is Deceased: Fake Science Journals Hit New Low. Ottawa: Ottawa Citizen; 2016.
24. Kolata G. For Scientists, an Exploding World of Pseudo-Academia. The New York Times. 2013. http://www.nytimes.com/2013/04/08/health/for-scientists-an-exploding-world-of-pseudo-academia.html?pagewanted=all . Accessed Jul 2016.
25. Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gøtzsche PC, et al. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.

Funding

No funding was received for this project.

Availability of data and materials

The screening and data extraction forms used, and the data generated, in this study are available from the authors on request.

Authors’ contributions

DM and LS conceived of this project and drafted the protocol, with revisions by VB. RB, JC, JG, OM, DM, JR, LS, BJS, and LT were involved in the conduct of this project. LS and LT performed analysis of data. LS drafted the manuscript. All authors provided feedback on this manuscript and approved the final version for publication.

Competing interests

VB is the Chair of COPE and the Executive Director of the Australasian Open Access Strategy Group.

Consent for publication

Not applicable.

Ethics approval and consent to participate

Not applicable.

Transparency declaration

David Moher affirms that this manuscript is an honest, accurate, and transparent account of the study being reported, that no important aspects of the study have been omitted, and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.

Author information

Authors and affiliations

Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, K1H 8L6, Canada

Larissa Shamseer, David Moher & James Galipeau

School of Epidemiology, Public Health and Preventative Medicine, University of Ottawa, Ottawa, K1H 8M5, Canada

Larissa Shamseer & David Moher

School of Medicine, Dentistry and Biomedical Sciences, Queen’s University Belfast, Belfast, BT9 7BL, UK

Onyi Maduekwe

Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, K1H 8L6, Canada

Lucy Turner

Office of Research Ethics and Integrity, Queensland University of Technology (QUT), Brisbane, QLD 4000, Australia

Virginia Barbour

Brigham and Women’s Hospital, Harvard Medical School, Boston, 02115, USA

Rebecca Burch

icddr,b, Dhaka, 1000, Bangladesh

Jocalyn Clark

Origin Editorial, Plymouth, MA, 02360, USA

Jason Roberts

Knowledge Synthesis Group, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, K1H 8L6, Canada

Beverley J. Shea


Corresponding author

Correspondence to Larissa Shamseer .

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Shamseer, L., Moher, D., Maduekwe, O. et al. Potential predatory and legitimate biomedical journals: can you tell the difference? A cross-sectional comparison. BMC Med 15 , 28 (2017). https://doi.org/10.1186/s12916-017-0785-9


Received : 11 November 2016

Accepted : 09 January 2017

Published : 16 March 2017

DOI : https://doi.org/10.1186/s12916-017-0785-9


Keywords

  • Scientific publishing
  • Publishing models
  • Biomedical journal
  • Journalology

BMC Medicine

ISSN: 1741-7015


Beall's List of Potential Predatory Journals and Publishers

Potential predatory scholarly open-access publishers

Instructions: first, find the journal’s publisher – it is usually written at the bottom of the journal’s webpage or in the “About” section. Then simply enter the publisher’s name or its URL in the search box above. If the journal does not have a publisher, use the Standalone Journals list. All journals published by a predatory publisher are potentially predatory unless stated otherwise.

Excluded – decide after reading

  • Multidisciplinary Digital Publishing Institute (MDPI) – I decided not to include MDPI on the list itself. However, I would urge anyone who wants to publish with this publisher to thoroughly read the wiki article detailing its possible ethical/publishing problems, and a recent article discussing its growth.

Useful pages

​List of journals falsely claiming to be indexed by DOAJ

DOAJ: Journals added and removed

Nonrecommended medical periodicals

Retraction Watch

Flaky Academic Journals Blog

List of scholarly publishing stings

Conferences

Questionable conferences [ archive ]

How to avoid predatory conferences

Flaky Academic Conferences Blog

Evaluating journals

Journal Evaluation Tool

JCR Master Journal List

DOAJ Journal Search

Think Check Submit

Original description by J. Beall

This is a list of questionable, scholarly open-access publishers. We recommend that scholars read the available reviews, assessments and descriptions provided here, and then decide for themselves whether they want to submit articles, serve as editors or on editorial boards. In a few cases, non-open access publishers whose practices match those of predatory publishers have been added to the list as well. The criteria for determining predatory publishers are here. We hope that tenure and promotion committees can also decide for themselves how importantly or not to rate articles published in these journals in the context of their own institutional standards and/or geocultural locus. We emphasize that journal publishers and journals change in their business and editorial practices over time. This list is kept up-to-date to the best extent possible but may not reflect sudden, unreported, or unknown enhancements.

Rising number of ‘predatory’ academic journals undermines research and public trust in scholarship


Professor of Journalism and Chair, Knight Center for Environmental Journalism, Michigan State University


Associate Professor of Media, KIMEP University

Disclosure statement

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Michigan State University provides funding as a founding partner of The Conversation US.




Taxpayers fund a lot of university research in the U.S., and these findings published in scholarly journals often produce major breakthroughs in medicine, vehicle safety, food safety, criminal justice, human rights and other topics that benefit the public at large.

The bar for publishing in a scholarly journal is often high. Independent experts diligently review and comment on submitted research – without knowing the names of the authors or their affiliated universities. They recommend whether a journal should accept an article or revise or reject it. The piece is then carefully edited before it is published.

But in a growing number of cases, these standards are not being upheld.

Some journals charge academics to publish their research – without first editing or scrutinizing the work with any ethical or editorial standards. These for-profit publications are often known as predatory journals because they are publications that claim to be legitimate scholarly journals but prey on unsuspecting academics to pay to publish and often misrepresent their publishing practices.

There were an estimated 996 publishers publishing over 11,800 predatory journals in 2015. That is roughly the same as the number of legitimate, open-access academic journals – available to readers without charge and archived in a library supported by a government or academic institution – published around the same time. In 2021, another estimate put the number of predatory journals at 15,000.

This trend could weaken public confidence in the validity of research on everything from health and agriculture to economics and journalism.

We are scholars of journalism and media ethics who see the negative effects predatory publishing is having on our own fields of journalism and mass communication. We believe it is important for people to understand how this problem affects society more broadly.

In most cases, the research published in these journals is mundane and does not get cited by other academics. But in other cases, poorly executed research – often on science – could mislead scientists and produce untrue findings.


Misleading practices

Publishing in journals is considered an essential part of being an academic because professors’ responsibilities generally include contributing new knowledge and ways of solving problems in their research fields. Publishing research is often a key part of academics keeping their jobs, getting promoted or receiving tenure – in an old phrase from academia, you publish or perish.

Predatory publishers often use deception to get scholars to submit their work. That includes false promises of peer review, which is a process that involves independent experts scrutinizing research. Other tactics include lack of transparency about charging authors to publish their research.

While fees vary, one publisher told us during our research that its going rate is $60 per printed page. An author reported paying $250 to publish in that same outlet. In contrast, legitimate journals charge a very small amount, or no fee at all, to publish manuscripts after editors and other independent experts closely review the work.

These kinds of journals – about 82.3% of which are located in poor countries, including India, Nigeria and Pakistan – can prey on junior faculty who are under intense pressure from their universities to publish research.

Low-paid young faculty and doctoral students, who may have limited English language proficiency and poor research and writing skills, are also especially vulnerable to publishers’ aggressive marketing, mostly via email.

Authors who publish in fraudulent journals may add these articles to their resumes, but such articles are rarely read and cited by other scholars, as is the norm with articles in legitimate journals. In some instances, articles are never published, despite payment.

Predatory publishers may also have an unusually large breadth of topics they cover. For example, we examined one Singapore-based company called PiscoMed Publishing, which boasts 86 journals in fields spanning religious studies and Chinese medicine to pharmacy and biochemistry. Nonpredatory publishers tend to be more focused in the breadth of their topics.

The Conversation contacted all of the journals named in this article for comment and did not receive a response regarding their work standards and ethics.

Another journal, the International Journal of Humanities and Social Science, says it publishes in about 40 fields, including criminology, business, international relations, linguistics, law, music, anthropology and ethics. We received an email from this journal, signed by its chief editor, who is listed as being affiliated with a U.S. university.

But when we called this university, we were told that the school does not employ anyone with that name. Another person at the school’s Art Department said that the editor in question no longer works there.

It is extremely difficult for people reading a study, or watching a news segment about a particular study, to recognize that it appeared in a predatory journal.

In some instances, these journals’ titles are almost identical to titles of authentic ones or have generic names like “Academic Sciences” and “BioMed Press.”

Scholars deceived

In a 2021 study, we surveyed and interviewed scholars in North America, Africa, Asia, Australia and Europe listed as editorial board members or reviewers for two predatory journalism and mass communication journals.

One company, David Publishing, gives a Delaware shipping and mailbox store as its address and uses a Southern California phone number. It says it publishes 52 journals in 36 disciplines, including philosophy, sports science and tourism.

Some scholars told us they were listed as authors in these journals without permission. One name still appeared as an author several years after the scholar’s death.

Our latest, forthcoming study conducted in 2023 surveyed and interviewed a sample of authors of 504 articles in one of those predatory journals focused on journalism and mass communication.

We wanted to learn why these authors – ranging from graduate students to tenured full professors – chose to submit their work to this journal and what their experience was like.

While most authors come from poor countries or other places such as Turkey and China, others listed affiliations with top American, Canadian and European universities.

Many people we contacted were unaware of the journal’s predatory character. One author told us of learning about the journal’s questionable practices only after reading an online posting that “warned people not to pay.”

A lack of concern

Some people we spoke with didn’t express concern about the ethical implications of publishing in a predatory journal, including dishonesty with authors’ peers and universities and potential deception of research funders. We have found that some authors invite colleagues to help pay the fees in exchange for putting their names on an article, even if they did none of the research or writing.

In fact, we heard many reasons for publishing in such journals.

These included long waits for peer review and high rejection rates at reputable journals.

In other cases, academics said that their universities were more concerned with how much they publish, rather than the quality of the publication that features their work.

“It was very important for me to have it at that time. I never paid again. But I got my promotion. It was recognized by my institution as a full publication. I profited … and it did the job,” one author from the Middle East told us in an interview.

Why it matters

Predatory publishing creates a major obstacle in the drive to ensure that new research on critical topics is well-founded and truthful.

This can have implications in health and medical research, among other areas. As one health care scholar explained, there is a risk that scientists could incorporate erroneous findings into their clinical practices.

High standards are crucial across all areas of research. Policymakers, governments, educators, students, journalists and others should be able to rely on credible and accurate research findings in their decision making, without constantly double-checking the validity of a source that falsely purports to be reputable.

  • Peer review
  • Academic research
  • Predatory journals
  • Misinformation
  • Science journals
  • Scholarship


Are papers published in predatory journals worthless? A geopolitical dimension revealed by content-based analysis of citations

Handling Editor: Ludo Waltman

Funder: Narodowe Centrum Nauki; Award ID: UMO-2017/26/E/HS2/00019

Zehra Taşkın , Franciszek Krawczyk , Emanuel Kulczycki; Are papers published in predatory journals worthless? A geopolitical dimension revealed by content-based analysis of citations. Quantitative Science Studies 2023; 4 (1): 44–67. doi: https://doi.org/10.1162/qss_a_00242

This study uses content-based citation analysis to move beyond the simplified classification of predatory journals. We show that, when we analyze papers not only in terms of the quantity of their citations but also the content of those citations, we are able to reveal the various roles played by papers published in journals accused of being predatory. To accomplish this, we analyzed the content of 9,995 citances (i.e., citation sentences) from 6,706 papers indexed in the Web of Science Core Collection that cite papers published in so-called “predatory” (or questionable) journals. The analysis revealed that the vast majority of such citances are neutral (97.3%), and negative citations of articles published in the analyzed journals are almost completely nonexistent (0.8%). Moreover, the analysis revealed that the most frequently mentioned countries in the citances are India, Pakistan, and Iran, with mentions of Western countries being rare. This highlights a geopolitical bias and shows the usefulness of viewing such journals as mislocated centers of scholarly communication. The analyzed journals provide regional data relevant to mainstream scholarly discussions, and the idea of predatory publishing hides geopolitical inequalities in global scholarly publishing. Our findings also contribute to the further development of content-based citation analysis.
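As a rough illustration of what content-based citation analysis involves, the sketch below classifies each citance with simple keyword cues and tallies the proportions per category. This is a toy under stated assumptions: the cue lists, example sentences, and function names are invented for this example and are not the manual coding scheme the study's authors actually used.

```python
# Toy content-based citation analysis: label each citance (citation
# sentence) as negative, positive, or neutral via keyword cues, then
# report the share of each label. Cue lists here are illustrative only.
from collections import Counter

NEGATIVE_CUES = ("however", "fails to", "in contrast to", "contradict", "flawed")
POSITIVE_CUES = ("confirm", "consistent with", "builds on", "as demonstrated")

def classify_citance(sentence: str) -> str:
    s = sentence.lower()
    if any(cue in s for cue in NEGATIVE_CUES):
        return "negative"
    if any(cue in s for cue in POSITIVE_CUES):
        return "positive"
    return "neutral"  # most citances carry no evaluative language

# Hypothetical citances, invented for illustration.
citances = [
    "Similar prevalence rates were reported for rural India [12].",
    "These results are consistent with earlier surveys [7].",
    "However, the sampling in [3] is flawed and contradicts later work.",
]

counts = Counter(classify_citance(c) for c in citances)
total = sum(counts.values())
for label, n in counts.items():
    print(f"{label}: {n}/{total} ({100 * n / total:.1f}%)")
```

The study's finding that 97.3% of citances are neutral corresponds, in this framing, to the "neutral" bucket dominating the tally; the actual analysis relied on human coding of citation context rather than keyword matching, which such a lexicon-based sketch can only approximate.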


The term predatory journals hides complex geopolitical inequalities, various motivations for scholarly publishing, and the local contexts in which these journals proliferate ( Krawczyk & Kulczycki, 2021b ). Similarly, the practice of citation-counting hides the role played by a given citation in developing the argument of a paper and the motivation for citing. In this study, we argue that the hidden phenomena are strongly related and that revealing this relation might deepen the understanding of transformations currently taking place in academia affecting scholarly communication. We prefer using the term questionable journals instead of predatory journals , as we argued in our previous study ( Kulczycki, Hołowiecki et al., 2021 ), because the former term does not imply a predatory intent of the publisher.

Previous studies that counted the number of citations referring to articles in questionable journals ( Frandsen, 2017 ; Moussa, 2021 ) have been unable to show the more complex nature of the phenomenon described as predatory publishing due to limitations of the method—that is, citation-counting. This study goes beyond this limitation and aims to examine the content of citations referring to questionable journals in journals that are widely accepted as legitimate (i.e., indexed in Web of Science Core Collection [WoS]).

The main research question is twofold. First, as a follow-up study to the previous study that examined the number of articles in questionable journals that are cited in WoS-indexed journals ( Kulczycki et al., 2021 ), we investigate the context of citations of questionable journals in legitimate journals. Second, we address the question of whether the content of citances (i.e., citation sentences) is specific to peripheral or semiperipheral countries (i.e., refers to local affairs). In terms of knowledge production, we understand there is a strongly one-sided influence of knowledge produced in the center compared to knowledge production in peripheries. Moreover, we reflect on what it could mean that questionable journals take on the role of mislocated centers of scholarly communication , which is the term we coined to describe and criticize the role of some publication channels in peripheral or semiperipheral countries without condemning scholars who publish in them or accusing publishers of bad intentions ( Krawczyk & Kulczycki, 2021b ).

1.1. Questioning the Concept of Predatory Journals

Over the past decade, predatory publishing has been one of the most discussed topics not only in the science of science but also among policymakers. Since Jeffrey Beall (2012) created the first list of so-called predatory journals in 2012, many papers have warned against such publication channels, as well as against predatory conferences or fake metrics ( Krawczyk & Kulczycki, 2021a ). The term predatory journals coined by Beall refers to journals that dishonestly use the open-access model and deceive scholars in favor of their own financial interests. Beall (2018) also argued that, because of predatory journals, pseudoscientific articles can leak into mainstream scholarly literature. Grudniewicz, Moher et al.’s (2019) recent definition does not link predatory publishing to the open access concept nor does it focus on the review process, as the authors consider it difficult to assess. They highlighted that such journals prioritize their self-interest at the expense of scholarship and are characterized by false or misleading information, poor editorial practices, and a lack of transparency. However, with definitions focusing on the journals, the quality of the articles in these predatory journals is not often considered. When citations referring to predatory journals are considered, a primary suggested solution has been that researchers simply stop citing such journals altogether ( Oermann, Nicoll et al., 2020 ).

The term predatory journals is a simple label for complex and multidimensional practices in scholarly communication. The debate over predatory publishing focuses almost entirely on journals published in English in non-English-speaking countries ( Eykens, Guns et al., 2019 ; Grudniewicz et al., 2019 ; Moussa, 2021 ). Various lists of predatory journals, such as the discontinued Beall’s List or the more complex and transparent Cabell’s Predatory Reports, are perceived as useful tools for flagging undesirable journals; however, they embody a rather simplistic point of view: “Good” journals are published mostly in central countries in English, while “bad” journals are published mostly in (semi)peripheral countries in English. This dichotomy does not hold: There are many bad journals with aggressive business models in central countries and many good journals published in English and, primarily, in local languages in (semi)peripheral countries.

Moreover, many editorially reputable journals from large commercial publishers operate business models that could themselves be accused of being predatory or questionable ( Siler, 2020 ). The term predatory journal evokes many negative connotations; nonetheless, researchers from peripheral or semiperipheral countries often publish in such journals because they count in the research evaluation regimes of semiperipheral countries ( Rochmyaningsih, 2019 ; Teixeira da Silva, Moradzadeh et al., 2022 ). Previous studies have revealed that peripheral or semiperipheral countries (sometimes called developing countries ), such as India, Iran, and Turkey, are more profoundly affected by predatory practices than central countries (especially the United States and Western Europe; Demir, 2018 ; Eve & Priego, 2017 ; Kulczycki, Hołowiecki et al., 2022 ). As described in our previous paper ( Krawczyk & Kulczycki, 2021b ), if a journal starts to be viewed as prestigious in semiperipheral countries (e.g., when it is indexed in Scopus) while still being seen as questionable in central countries, it becomes a mislocated center of scholarly communication.

In our daily work as researchers and policy advisors, we observe that many scholars and policymakers assume that all articles published in questionable journals could not be published elsewhere and thus are of low quality. With this in mind, in this study, we aim to address the question of whether one can transfer the assessment of a journal (i.e., as a questionable channel of communication) to the evaluation of a single article published in the journal. The results show that going beyond citation-counting allows us to reveal the more complex phenomena behind the simplified notion of a predatory journal and undermines possible assumptions regarding the predation of articles.

1.2. Beyond Simply Counting: Content-Based Citation Analysis

Researchers are expected by their institutions and by policymakers at various national and global levels to publish in journals with high impact factors and to accumulate citations of these publications. Although many studies have reported that this evaluation practice is problematic ( Hicks, Wouters et al., 2015 ; “Read the Declaration,” n.d. ; Wilsdon, Allen et al., 2015 ), throughout most of the world, academic success is still determined according to these criteria. However, a system based solely on citation quantity faces significant challenges. For example, policymakers attempt to draw a clear line between “good” and “bad” journals by using journal impact factors as an indicator of journal quality. This creates a scholarly environment in which articles in high-impact-factor journals are considered legitimate or of good quality, while articles in questionable journals are deemed worthless. Interestingly, our previous study revealed that questionable journals are often cited by legitimate ones ( Kulczycki et al., 2021 ). This calls the meaning of citation quantity into question and motivates methods for differentiating citations in terms of their content.

Our argument regarding questionable publishing is similar to more nuanced approaches to predatory publishing, such as the campaign “Think. Check. Submit,” which relies less heavily on lists of good and bad journals ( www.thinkchecksubmit.org ). Additionally, Cabell’s Predatory Reports evaluates journals against several criteria, such as whether they provide misleading information, send spam, or have a website that seems overly focused on collecting publishing fees ( Siler, 2020 ). In this regard, content-based citation analysis can provide new understanding of these journals as well as of the papers published in them, which is important because valuable papers are sometimes published in journals with questionable publishing practices and are consequently overlooked.

Content-based citation analysis is not a new approach to citation analysis. Most citation analysis studies, assuming that not all citations are equal, have started with the questions “Why do authors cite?” or “What are the motivations of authors to cite?” ( Bonzi & Snyder, 1991 ; Brooks, 1986 ; Cano, 1989 ; Chubin & Moitra, 1975 ; Cronin, 1981 ; Garfield, 1970 ). Various classification schemes have been developed, and citations have been classified according to these schemes using natural language processing tools and machine learning techniques ( Iqbal, Hassan et al., 2021 ). Content-based methods that older studies carried out manually on small samples have gained momentum today with the diversification of computerized processing methods and increased access to full scientific texts, as predicted by Teufel (1999) .

The first results of content-based analysis in practice have already been reported. A deep learning tool called scite was launched to classify citation contexts ( Nicholson, Mordaunt et al., 2021 ). Scite obtains documents, mines citation contexts, matches references, and classifies citations in terms of their meanings (supporting, contrasting, and mentioning citations). In addition to scite , WoS added a new service to its citation indexes called Enriched Cited References ( Clarivate, 2021 ), which provides information regarding the location of citations in the text in terms of the Introduction, Methodology, Results, and Discussion (IMRaD) structure and their purpose (support, differ, basis, background, and discuss). Both services rely on citation extraction, the mining of full texts, and automatic classification. These developments show that the future of citation analysis is being reshaped by content-based citation analysis systems. Our study not only contributes a new corpus to the content-based citation analysis literature but also identifies some of its present challenges and proposes solutions.

In this paper, we investigate citances in WoS-indexed articles referring to questionable journal articles to understand the contexts of the citations. A citance is a neologism coined by Nakov, Schwartz, and Hearst (2004) to denote the sentence(s) surrounding a citation within a document. To achieve the aim of this study, all cited and citing articles were downloaded as PDFs and their metadata stored in a MySQL database. Inaccessible articles ( n = 44) were removed from the data set. A description of the data set is shown in Figure 1 .
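Locating a citance around a citation marker can be sketched as follows; this minimal version assumes numbered bracket markers like `[1]` and naive sentence splitting, whereas the actual tagging relied on author names and titles in PDFs:

```python
import re

def extract_citance(text, marker):
    """Return the sentence(s) containing the given citation marker.

    Sentences are split naively on '.', '!', '?' followed by whitespace;
    real PDF text needs a more robust sentence tokenizer.
    """
    sentences = re.split(r'(?<=[.!?])\s+', text)
    return [s for s in sentences if marker in s]

text = ("Predatory publishing has grown rapidly [1]. "
        "Several studies report regional differences [2]. "
        "We follow the approach of [1] and [3].")
print(extract_citance(text, "[1]"))  # two sentences contain [1]
```

A marker may occur in several sentences, so the function returns all of them; the taggers likewise copied one or more sentences per citation.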

Descriptive statistics of the data set.

2.2. Classification Scheme

To understand and classify the content of citances, we conducted content-based analysis. For the effective content-based analysis of citations, both supervised and unsupervised methods have been suggested in the literature ( Athar, 2011 ; Taşkın & Al, 2018 ). In this study, to classify citations in terms of their content, we chose expert tagging. The citation classification scheme developed by Taşkın and Al (2018) was used in the expert tagging process (see Figure 2 ).

Four classes of the classification scheme for citances. An example of each citation class is presented in Appendix A.

In the tagging process, all citations are classified according to four main categories: meaning, purpose, shape, and array. The meaning class defines the authors’ interpretation of the work they have cited (positive, negative, both, or neutral). In the purpose class, the classification is made by considering the author’s objective for the citation, such as providing literature examples, giving a definition, using a methodology, or validating research results or data. Citations are sometimes accompanied by the author’s name or direct quotes from their work; additionally, at times, there may be many works cited within one sentence. These factors are assessed by the shape class. Finally, the array class is used to understand in which sections, how many times, and in how many different sections each study is cited.
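The four-class scheme can be represented as a simple record. The value sets below are illustrative condensations of the classes described above, not the authors' exact code lists, which are defined in Taşkın and Al (2018):

```python
from dataclasses import dataclass

# Illustrative value sets; the published scheme defines the authoritative ones.
MEANING = {"positive", "negative", "both", "neutral"}
PURPOSE = {"literature", "definition", "methodology", "comparison/validation", "data"}
SHAPE = {"author_name", "quotation", "single", "multiple"}
SECTIONS = {"introduction", "methodology", "findings", "discussion", "other"}

@dataclass
class TaggedCitance:
    text: str
    meaning: str   # the citing author's interpretation of the cited work
    purpose: str   # the citing author's objective for the citation
    shape: str     # how the citation appears in the sentence
    section: str   # IMRaD section where the citance occurs (the array class)

    def validate(self):
        """True if every tag belongs to its class's value set."""
        return (self.meaning in MEANING and self.purpose in PURPOSE
                and self.shape in SHAPE and self.section in SECTIONS)

c = TaggedCitance("Similar results were reported in [4].",
                  "neutral", "comparison/validation", "single", "discussion")
print(c.validate())  # True
```

A record like this maps directly onto the dropdown menus of the tagging interface described in the next section.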

2.3. Collection of Citances and Tagging Process

For the expert tagging procedure, a database was created with a custom tagging interface written in PHP and JavaScript (see Figure 3 ). To ensure accurate tagging, 20 citances were tagged by all the authors (referred to here as taggers ) before the main tagging began; the results were then discussed. We called this process calibration . Its main aim was to develop a common understanding among all taggers. All citances were then tagged by the authors.

Tagging interface.

The tagging workflow was as follows:

1. The tagger opens the PDFs of the citing and cited papers.

2. The tagger finds citations using author names or titles and copies the citances (which can be one sentence or more). Taggers must check the whole paragraph and decide which parts refer to the cited paper.

3. If the reference is not mentioned in the text, the tagger selects the “Not Cited” (yellow) option. When this option is selected, the other dropdown menus are deactivated.

4. If the citance is not written in English or another language in which the taggers are fluent, the taggers use Google Translate and tag the translated version.

5. The tagger chooses the citation classes.

6. If the language of a questionable paper is not English, the tagger records the paper’s language.

7. To change already-tagged citances, the tagger can use the editing screen shown in Figure 3(b) .

After the tagging process, all citations were classified by a single tagger into the four main citation classes. However, validation was needed for the Meaning class, which rests on the taggers’ interpretation and is relatively more subjective. To this end, all positive, negative, and positive/negative citations, along with 273 randomly selected neutral citations tagged by one tagger, were retagged by all three taggers. The interannotator agreement scores are presented in detail in Section 5 .
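Interannotator agreement for the retagged Meaning labels can be quantified, for any pair of taggers, with Cohen's kappa; the paper does not name the statistic it used, so this stdlib-only sketch is illustrative:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items."""
    n = len(labels_a)
    # Observed agreement: share of items on which the annotators agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement if the annotators labeled independently.
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["neutral", "neutral", "positive", "negative", "neutral", "neutral"]
b = ["neutral", "neutral", "positive", "neutral", "neutral", "neutral"]
print(round(cohens_kappa(a, b), 3))  # 0.6
```

Kappa corrects raw agreement for chance, which matters here because neutral citations dominate the Meaning class and would inflate a simple percentage-agreement figure.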

2.4. Visualizations, Analysis, and Statistical Tests

We conducted a content analysis of the citances by counting word frequencies. After identifying country names in the citances, we used VOSviewer to analyze keyword occurrences. All keywords were unified and standardized before the analyses were performed. The full counting method was chosen for the visualization and content analysis of the citances. Figure 7 shows the co-occurrences of keywords, appearing at least 10 times, from the 1,474 citances containing country names. Country self-citations, provided in Table 2 , indicate the number of citances mentioning a country that were written by authors affiliated with an institution from that same country. Only corresponding authors are considered.
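Full counting of co-occurrences, as done in VOSviewer, can be sketched with the standard library; in this hedged version each citance is a toy list of the country names it mentions:

```python
from itertools import combinations
from collections import Counter

def cooccurrences(citance_countries):
    """Count unordered country pairs appearing in the same citance.

    Full counting: every distinct pair in a citance counts once,
    regardless of how many pairs the citance contains.
    """
    pairs = Counter()
    for countries in citance_countries:
        for a, b in combinations(sorted(set(countries)), 2):
            pairs[(a, b)] += 1
    return pairs

# Toy data: country names extracted from three citances.
citances = [
    ["India", "Pakistan"],
    ["India", "China", "Pakistan"],
    ["China"],
]
print(cooccurrences(citances)[("India", "Pakistan")])  # 2
```

Sorting each citance's country set keeps the pair keys canonical, so ("India", "Pakistan") and ("Pakistan", "India") are counted together.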

The findings of the study are presented in two sections. First, we describe the general characteristics of the analyzed citances by reporting the number of citations per text, the sections in which the citations were made, and the purpose and shape of the citations. Second, we describe the content analysis of the citances, which revealed that non-Western countries were the most frequently mentioned. Through the analysis of word co-occurrences, we describe the contexts in which these countries were mentioned.

3.1. General Characteristics of Citances

3.1.1. Descriptive Statistics

We examined the full texts of 3,221 questionable articles and their 6,706 WoS-indexed citers, tagging the citances in 10,283 transactions. However, 288 citations (2.8%) listed in the reference sections of the legitimate articles were never referred to or mentioned in the article bodies. Seventy-eight percent of these missing citations appeared in articles indexed in the Emerging Sources Citation Index (ESCI) and 22% in Journal Citation Reports (JCR) journals; only one journal was indexed in the Arts & Humanities Citation Index (AHCI). After removing the noncited references, the initial data set contained 9,995 citances. The descriptive statistics for the cited and citing papers are shown in Table 1 .

Distribution of the number of citances referring to questionable articles in legitimate articles

Table 1 shows that 67% of the legitimate articles cited questionable articles once in the text, and 90% cited them one to three times. These statistics underline the need to understand the motivations behind citations, a task with which the content-based analysis of citances can help.

3.1.2. Sections and purposes of citations

According to the results, 66.9% of the citances were found in the introduction section, followed by the discussion (17.1%) and findings (9%) sections (see Figure 4 ). The distribution of the citances across the IMRaD categories differed from that in one of our previous studies: Taşkın and Al (2018) found 85% of citances in the introduction section of the Turkish library and information science literature. However, studies in the literature have suggested that citations in the methodology, findings, or discussion sections are more important than those in the introduction ( Maričić, Spaventi et al., 1998 ; Voos & Dagaev, 1976 ). Therefore, it is important to investigate citances in different sections.

Distribution of citances in the purpose and array classes.

When the purpose of the citances in each section was investigated, we found that 90% of citances in the introduction section were literature citations. However, the distribution of the classes in the other sections was quite different from that in the introduction. For example, unsurprisingly, almost 70% of citances in the methodology section were intended to explain the methods of the study. Overall, the purpose class of citances differed according to the IMRaD sections (χ²(10) = 8227.559, p < 0.001, V = 0.454).
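The effect size reported alongside the chi-square statistics here is Cramér's V, defined as V = sqrt(χ² / (n · (min(r, c) − 1))) for an r × c contingency table. A stdlib sketch over a toy 2 × 2 table (the study's actual tables are larger):

```python
import math

def chi_square_cramers_v(table):
    """Pearson chi-square statistic and Cramér's V for a contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    # Sum of (observed - expected)^2 / expected over all cells.
    chi2 = sum((table[i][j] - rows[i] * cols[j] / n) ** 2
               / (rows[i] * cols[j] / n)
               for i in range(len(rows)) for j in range(len(cols)))
    v = math.sqrt(chi2 / (n * (min(len(rows), len(cols)) - 1)))
    return chi2, v

# Toy table: two sections (rows) by two citation purposes (columns).
table = [[90, 10],
         [30, 70]]
chi2, v = chi_square_cramers_v(table)
print(round(chi2, 2), round(v, 3))  # 75.0 0.612
```

Because V is normalized by sample size and table dimensions, it lets effects be compared across the differently sized tests reported in this section, which raw χ² values do not.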

It should be noted that more than 20% of citances in the findings section and almost 50% of citations in the discussion section had the objective of comparing or validating the articles (i.e., with similar studies in the literature). This finding can open up a new discussion for future studies regarding the citing behaviors of authors. For instance, it could be investigated whether the authors cite articles that support/validate their hypotheses without considering the publication venue.

3.1.3. Shapes of citations

There are many ways to cite others’ publications. Some researchers have indicated that the most valuable citation types are those mentioning authors’ names and those with quotations ( Bonzi, 1982 ; Zhu, Turney et al., 2015 ). However, with the massive increase in the number of publications in all scientific fields, researchers have started to cite papers without reading them ( Simkin & Roychowdhury, 2015 ), and multiple citations in a single citance may signal this phenomenon. From this point of view, multiple citations carry less weight in scientific writing. This study produced interesting results about the shapes of the analyzed citances (see Figure 5 ). Although the chi-square test pointed to a dependence between the IMRaD sections and citation shapes (χ²(12) = 36.644, p < 0.001, V = 0.037), the effect was much weaker than that found for the citation purposes.

Distribution of citances in the shape and array classes.

Unlike studies in the literature reporting that citances mentioning author names are the most common citation shape ( Bonzi, 1982 ; Taşkın & Al, 2018 ), in the present study, almost half of the citances in the data set were multiple citations. Moreover, quotations were extremely rare. As previously mentioned, the high rate of multiple citations (e.g., citances such as “there are many studies in the literature on this subject” followed by many cited articles) could indicate citations made without reading the corresponding articles. They could also be coercive citations requested by editors or reviewers; as indicated by Yu, Yu, and Wang (2014) , abnormal citing behaviors are common in coercive citation practices. Therefore, future investigations of the citing behaviors of authors who cite multiple sources could be useful.
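Flagging multiple citations automatically can be approximated by counting reference markers per citance; this sketch assumes numeric bracket markers with optional ranges, whereas the study classified shapes by hand:

```python
import re

def citation_count(citance):
    """Count numeric bracket citation markers, expanding ranges like [3-5]."""
    count = 0
    # Each match is the inside of one bracket group, e.g. "2, 5-7, 11".
    for match in re.findall(r'\[([\d,\s\-]+)\]', citance):
        for part in match.split(','):
            part = part.strip()
            if '-' in part:
                lo, hi = part.split('-')
                count += int(hi) - int(lo) + 1
            else:
                count += 1
    return count

print(citation_count("Many studies address this subject [2, 5-7, 11]."))  # 5
```

A citance with a count above one would fall into the multiple-citation shape; author-name and quotation shapes would still require the manual judgment used in the study.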

3.2. Content Analysis of Citances

Table 2 presents the top 20 countries mentioned in the analyzed citances. The “ N of occurrences” column shows the total number of citances including each country name. However, while some citances include examples from various countries, others concern a single country. Therefore, we added a column to Table 2 showing single mentions (SMs) of countries. For example, while 156 citances mentioned India, 56 of these also included other country names; thus, only 100 of the citances were solely about India. Table 2 also shows the country self-citation rates for each occurrence type (all and SMs).

The most frequently mentioned countries in the analyzed citances

The economic positions of the countries in Table 2 vary, and the only countries on the list that can be classified as Western are the United States and Australia. The other Western countries with the most occurrences are the United Kingdom (31) and Italy (30); central countries such as Germany, Canada, and Spain have fewer than 15 occurrences each. This highlights a geopolitical bias: Even as the advantages of Western countries in research funding or numbers of publications shift, there is still significant Western cultural hegemony in science ( Marginson, 2021 ).

Moreover, for some countries, such as India and Thailand, the vast majority of their referent citations came from authors affiliated with these countries. This was not the case for China or Egypt, which were more frequently mentioned by authors from outside these countries. This observation highlights the heterogeneity of the positions of non-Western countries in Western-centered academic publishing.
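The occurrence and single-mention counts described above can be derived mechanically from per-citance country lists; a minimal sketch with toy data (the study's actual counts come from Table 2):

```python
from collections import Counter

def occurrence_and_single_mentions(citance_countries):
    """Count, per country, the citances mentioning it (occurrences)
    and the citances mentioning it alone (single mentions, SM)."""
    occ, single = Counter(), Counter()
    for countries in citance_countries:
        unique = set(countries)
        for country in unique:
            occ[country] += 1
            if len(unique) == 1:
                single[country] += 1
    return occ, single

# Toy data: each inner list holds the countries named in one citance.
citances = [["India"], ["India", "Iran"], ["India"], ["Iran"]]
occ, single = occurrence_and_single_mentions(citances)
print(occ["India"], single["India"])  # 3 2
```

Deduplicating country names within a citance first ensures that a citance naming India twice still counts as one occurrence, matching how the table's citance-level counts are described.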

Another important finding relates to the cross-country comparisons or examples in the citances. As shown in Table 2 , some citances included information about more than two countries. We created a co-occurrence network from the country names mentioned in the citances; the results are shown in Figure 6 . The regional clustering in the figure is obvious: Authors cited publications in questionable journals to make comparisons or to describe the current state of a specific subject in a particular region. These citations could be explained by the fact that questionable journals are often the only easily available journals for many non-Western scholars, and, in the end, questionable journals are the likely publication venues for many papers covering data about these countries.

Co-occurrence of countries in the citances.

Complementing Figure 6 , Figure 7 shows a co-occurrence map of the keywords. Four main subject categories were identified: economics (red), second language (blue), education (yellow), and geography-based challenges (green). The clusters show the connections between the countries and subjects such as economic development or academic publishing. The green cluster deserves particular attention, as the citances in this cluster cited papers about populations, gender issues, and governments in Africa and Asia. As presented by Canagarajah (2002) , scholars from the periphery face many barriers to publishing successfully in mainstream journals from the center (e.g., different writing style requirements, reviewer biases, or language issues). Consequently, it is likely that many of those papers on societies in Africa or Asia were published in questionable journals largely because of geopolitical inequalities in academic publishing (i.e., the authors did not meet the central journals’ expectations). We assume that these papers are cited because these subjects are important for understanding the current situation in the world outside Europe and North America.

Co-occurrence of keywords in the citances covering country names.

To verify this assumption, we again analyzed citances that mentioned India, China, or the United States. We chose these countries because India was the most frequently mentioned country overall, China was the most frequently mentioned non-Western country with the relatively lowest percentage of mentions from authors of the same country, and the United States was the most frequently mentioned Western country. Unsurprisingly, most of the citations did not explain why information about a given country was taken from that particular journal rather than another. Nonetheless, we found a few examples of authors acknowledging that the literature on a given country was scarce. For instance, in three papers that mentioned India, the authors stated that the available literature for citing was “very limited” ( Ismail & Ahmed, 2019 , p. 228) or that they used it because “no exact data exist on the Indian traditional medicine industry” ( Kloos, 2017 , p. 1). Additionally, one of the papers mentioning China in its citances noted the lack of data on China. Articles mentioning the United States show that mentions of a lack of literature are not specific to non-Western countries. However, only one paper mentioned a lack of studies on a certain topic in the United States and then cited a paper from a questionable journal on that topic centered on the United States. Two other citances mentioning a lack of studies on certain topics in the United States cited papers from questionable journals as proof that such studies exist in the context of other countries.

Figure 8 shows the distribution of citances in the purpose class according to whether they mention country names. The most interesting result was the rate of data citations: Although the number of cases in the data class was lower than in the other citation classes, it implies that authors cite the statistics of developing countries (e.g., population, demographics, and economics) to show the current situation in these countries, supporting our earlier observation. The chi-square test also confirmed a dependence between the citation class and the mention of country names (χ²(10) = 152.357, p < 0.001, V = 0.087).

Distribution of citances in the purpose class in terms of including country names.

The results in this section highlight researchers’ need for information about noncentral countries, a need supported by the free access to knowledge provided by the transition to the open-access model in scholarly communication. However, this need leads to the practice of citing papers from journals accused of being predatory. One possible explanation is that researchers are often unable to find information in the mainstream literature about themes such as “Islamic banking” or the language skills of students in non-Western countries. To understand this issue more clearly, one can use the concept of the mislocated center of scholarly communication, which refers not to the quality of journals but to their position in geopolitical relations of power. From this perspective, the results point to an important contradiction in the system: on the one hand, the strong delegitimization of certain journals, which are mislocated centers of scholarly communication and which US scholars place on predatory journal lists; on the other, scholars’ need to cite papers providing information about noncentral countries that is less frequently found in more Eurocentric central journals.

This study aims to understand the content of the citations in WoS-indexed journals referring to questionable journals and to reveal whether these journals are as “worthless” as they are often perceived to be. Overall, the citations referring to questionable journals did not differ substantially from those described in the content-based analysis literature on legitimate journals, and the distribution of the citances across the citation classes closely followed that reported in the literature. In the content-based citation analysis literature, positive and negative citations are extremely rare; for example, Taşkın and Al (2018) found that 2.0% of citances in the library and information science literature are positive and 0.2% are negative. The rates found in the present study are similar, as expected.

Questionable publishing is characterized by publication and editorial practices such as illegitimate peer review or misleading advertisements. A journal can be considered questionable for soliciting articles without considering their quality or contribution to science. A conference organizer can be predatory for organizing more than 100,000 conferences in 3 years without considering the scientific contributions of the proceedings. Researchers can use questionable publication channels by publishing papers with the sole aim of obtaining tenure or other incentives. However, a scientific article cannot itself be questionable in the sense implied by the discussion on “predatory publishing,” especially in the unequal world of the publishing sector. The current scholarly publishing sector does not commonly consider the quality of articles (although various article-level indicators exist) or their contributions to science; the main evaluation mechanism for scholarly articles is still the publication venue and its metrics. Yet scientific articles are contributions to the current scientific heritage, spreading knowledge across disciplines, sharing research findings, and creating paths for new studies. The present study supports arguments for including a geopolitical dimension in the analysis of questionable journals. Such analysis would not be limited to assessing the difference between legitimate and questionable journals or to developing new metrics but would also consider enhancing the accessibility (in terms of authorship and readership) of academic publishing to scholars from all regions of the world.

The practice of counting citations, like the term predatory journals , leads to a simplified conception of the issue of citing questionable journals. Studies that previously analyzed citations referring to predatory journals have not questioned the predatory label, regardless of their findings. When Moussa (2021) observed a high number of citations referring to predatory journals in marketing, he stated that the risk of “infecting” (p. 503) the scholarly literature is high; when Frandsen (2017) found a low number of citations, she stated that the danger posed by these journals is lower. Considering the content of citations, however, enables us to look beyond the issue of assumed predators.

An important finding of this study is that the majority of noncentral countries were mentioned by authors affiliated with those same countries. This leaves room for further studies on regional citation networks that are less influenced by the international prestige of the journal in which the cited article is published. However, this pattern did not hold for every country: Although 89.1% of mentions of India were written by Indian authors, only half of the mentions of China were written by Chinese authors. This does not undermine our finding that the need for information about non-Western countries is at least partially fulfilled by citing articles from journals accused of being predatory. Such findings reveal the complexity of the geopolitical issues surrounding academic publishing. One such issue is the often-biased processes of legitimization and delegitimization of journals and articles, which can be influenced by arbitrary academic writing norms ( Canagarajah, 2002 ) or prejudices against open access journals ( Krawczyk & Kulczycki, 2021a ). Another is the general perception of which countries deserve to be mentioned in scholarly articles, which could also influence the observed citation patterns. If we reduce these complex issues to a blanket warning against citing predatory journals, we will only deepen geopolitical inequalities in academia instead of counteracting them.

Paradoxically, however, without addressing the contradiction between the practice of accusing journals of being predatory and the practice of citing papers from those same journals, the unequal division between the centers and peripheries of science will again be reinforced. Our findings show that understanding some journals as mislocated centers of scholarly communication is relevant for analyzing questionable journals. Because knowledge from the center is an important source of legitimization outside it ( Rodriguez Medina, 2014 ), from the perspective of some scholars in the peripheries, mislocated centers of scholarly communication appear to be part of the center, while they remain mostly invisible to, or are considered illegitimate by, scholars in the center ( Krawczyk & Kulczycki, 2021b ). A few citations in good articles in central journals can lead scholars from the periphery to believe they have published in the “right” journal. At the same time, from the perspective of many central scholars or institutions using lists of predatory journals, these same scholars will be suspected of fraudulent behavior because they published in a listed journal.

Such findings demonstrate the usefulness of content-based citation analysis, and this study contributes to further development in this area. One important difference from the content-based citation analysis literature on legitimate journals is that the present study found validation or comparison citations to be more common in the discussion and conclusion sections. This may indicate that the authors’ main purpose in citing was to support or counter their own views, regardless of the publication venue. This approach may be open to criticism, but, as suggested by the San Francisco Declaration on Research Assessment ( San Francisco Declaration on Research Assessment, n.d. ), output-based assessments should be used instead of journal-based metrics to assess the quality of articles. Yet all studies on predatory journals evaluate journals, not articles. To be able to deem the articles themselves worthless, we must focus on that level.

Another important difference is the high rate of multiple citations. Authors often tend to cite collectively when they have not read the cited sources ( Simkin & Roychowdhury, 2015 ). This may be explained by changes in authors’ motivations to cite in the “publish-or-perish” world, but it also provides important insights into the problems of citation-based performance evaluation models. Our findings validated that not all citations used as quality indicators in academia are of equal value. Citation counts are just numbers, and they do not describe the quality of the articles.

One of the important findings of this study was the existence of references in the reference list that were not cited in the text. Although in some fields (e.g., various subfields of history) listing such references is a way to show that the author is aware of the literature, the fact that these missing citations occurred frequently in journals indexed in the ESCI may indicate that the editorial processes of journals in this index are more superficial than those of JCR journals. For this reason, to minimize editorial errors, editors and editorial boards should work with a checklist and ensure the accuracy of citations.

This study showed that obtaining knowledge about non-Western countries is an important part of the phenomenon of citing questionable journals. This finding supports our argument that the main question is not how to eliminate all questionable or predatory journals most efficiently but, rather, how to provide better ways to communicate knowledge from the many regions of Asia and Africa. To minimize the problems created by this situation, research performance evaluation models that take local publication practices into account should be developed, the diamond open-access action plan based on community publishing should be supported (Ancion, Borrell-Damián et al., 2022), and researchers should be prevented from losing their valuable work to questionable publishers. In this way, effective publishing practices will become widespread, and the problem of questionable journals will be minimized.

5.1. Conceptual Limitations to Overcome in Future Studies

In this study, we analyzed the connections, established by citations, between WoS-indexed and questionable journals. However, we ignored factors such as the status, reputation, or level of the journals in each category. For this reason, future investigations and multidimensional analyses are needed to consider all angles of the subject, including author groups, publication languages, and center–periphery collaborative papers.

We evaluated the contents of citations referring to questionable journals and revealed some geographical findings for peripheral countries. However, to make accurate comparisons, follow-up analysis of articles in the legitimate literature is needed.

5.2. Methodological Limitations of Content-Based Citation Analysis

5.2.1. Understanding the positive and negative meanings of citations

The citation meaning class includes positive, negative, neutral, and both positive and negative citations. The main aim of this classification is to understand the sentiments citers convey when citing. The tagging results showed that only 1.7% of the citances were positive and 0.8% were negative. This distribution was expected: studies in the literature have shown that positive and, especially, negative citations are extremely rare (Lacetera & Oettl, 2015; Spiegel-Rosing, 1977; Taşkın & Al, 2018). However, we would like to highlight a more important issue: the challenge of understanding the meanings of citations.

Even the first tagger disagreed with their own initial decisions in the next tagging session and changed some tags to neutral. This occurred in all citation classes, but predominantly for positive citations. In other words, the perceived meaning of a citation can change even for the same person, depending on the tagger's mood that day, noise in the environment, or other factors. This shows the difficulty of accurate classification in content-based citation analysis, especially for positive and negative citations.
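
The session-to-session consistency discussed here can be quantified with Cohen's kappa, a chance-corrected agreement measure. The study does not report a kappa value; the sketch below, with made-up tags, only illustrates the computation:

```python
from collections import Counter

def cohen_kappa(tags_a, tags_b):
    """Chance-corrected agreement between two tagging sessions."""
    n = len(tags_a)
    observed = sum(a == b for a, b in zip(tags_a, tags_b)) / n
    ca, cb = Counter(tags_a), Counter(tags_b)
    expected = sum(ca[k] * cb[k] for k in ca) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical tags from two sessions by the same tagger:
# one "positive" tag drifts to "neutral" on the second pass.
s1 = ["neutral", "positive", "neutral", "negative", "neutral", "positive"]
s2 = ["neutral", "neutral", "neutral", "negative", "neutral", "positive"]
print(round(cohen_kappa(s1, s2), 3))  # → 0.714
```

A kappa well below 1 even when a single tagger re-tags their own work would support the point that meaning-based classes are unstable.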

Some words, such as useful, comprehensively, significantly, or influential, appeared frequently around citations, but they were not always used to describe the cited article. For this reason, it is often difficult to determine what the related words refer to. The true motivation for a citation can be established only by asking the author; however, researchers who cite an average of 30 sources in each article, and read even more, cannot be expected to remember their views on every source they cite. This highlights the difficulty of the meaning-based classification of citations.

In some cases, a citing paper presented findings that contradicted, or were compared with, those of the cited paper; some taggers considered these citations negative, while others indicated that such citances could not be considered negative. In these cases, it was difficult to distinguish between contradictory results and negative citations.

As evidenced by previous studies (Taşkın & Al, 2018), positive and especially negative citations are expressed in a very polite and implicit way. It is always difficult to infer the positive or negative intentions of citers from a single sentence. For this reason, distinguishing the true meaning of citations is challenging not only for machine learning algorithms but also for humans.

Distribution of citances in the meaning class and agreement rates.


5.2.2. Finding citances in the texts

A citance can consist of one or more sentences. Although linking words such as although, however, and those help identify citances spanning more than one sentence, this method does not always work. Within the scope of the present study, whole paragraphs were considered in the tagging process, but performing the same process in automated systems is difficult, because such systems require explicit rule lists. To ensure correct classification, it is important to be able to determine where a citance begins and ends.

Special characters in surnames, and mistakes made by citers (e.g., citing a first name instead of a surname), create problems for finding citances in full texts. This was one of the main limitations of this study. Mistakes by authors, combined with a lack of editorial control, can complicate content-based citation analysis. To remedy this issue, referencing styles must be applied correctly by authors and editors alike.

5.2.3. Classifying multiple citations

As presented in the previous sections, multiple citations were common among the citances in the data set. However, it was often very difficult to determine which source the authors were referring to, especially in citances containing two different interpretations (Abu-Jbara & Radev, 2012). Overcoming this challenge in content-based citation analysis is difficult because, if the author does not point to a specific work in their comment, it is impossible to identify which work is meant.

It is clear that the future of citation analysis lies in content-based citation analysis. This study showed that such analysis helps us go beyond the simplified division between highly cited and predatory journals. However, it also confirmed that content-based citation analyses face many challenges, from data quality issues to difficulties in understanding content. It is therefore important to solve these issues before applying content-based analysis in current performance evaluation systems. We expect machines to classify citations by meaning, but even experts cannot do this accurately. Given that machine learning systems are trained by humans, machine classification systems need further development before they can be used in research evaluation.

We would like to thank Marek Hołowiecki and Abdulkadir Taşkın for their support in creating data sets, collecting data, and designing databases and interfaces.

Zehra Taşkın: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Visualization, Writing—original draft, Writing—review & editing. Franciszek Krawczyk: Data curation, Investigation, Writing—original draft, Writing—review & editing. Emanuel Kulczycki: Conceptualization, Data curation, Funding acquisition, Investigation, Methodology, Writing—original draft, Writing—review & editing.

The authors have no conflicts of interest.

This work was financially supported by the National Science Centre in Poland (Grant Number UMO-2017/26/E/HS2/00019).

Full data (coded citances, list of articles) for this project are available at https://osf.io/chsgp .

Mentions of the author name(s), multiple citations, quotations, and N/A classes were included for the statistical tests. When all classes were included, the test results were χ²(28) = 230.336, p < 0.001, V = 0.076.
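
The footnote's effect size can be reproduced from the chi-square statistic with Cramér's V. The excerpt gives χ²(28) = 230.336 but not the sample size or table shape, so the values below (roughly 10,000 citances in a 5 × 8 table, one shape consistent with df = 28) are assumptions:

```python
from math import sqrt

def cramers_v(chi2, n, n_rows, n_cols):
    """Cramér's V effect size for an r x c contingency table."""
    return sqrt(chi2 / (n * min(n_rows - 1, n_cols - 1)))

# chi2 = 230.336 with df = 28; n = 10_000 is an assumed sample size.
print(round(cramers_v(230.336, 10_000, 5, 8), 3))  # → 0.076
```

Under these assumed inputs the result matches the reported V = 0.076: a small effect size despite the highly significant p-value, which is typical for large samples.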

Examples of citation sentences for each citation class:


International Journal of Current Research and Review

Discontinued in Scopus as of 2021


Subject Area and Category

  • Biochemistry, Genetics and Molecular Biology (miscellaneous)
  • Dentistry (miscellaneous)
  • Health Professions (miscellaneous)

Radiance Research Academy

Publication type

ISSN: 09755241, 22312196

Coverage: 2014, 2019-2021


The set of journals has been ranked according to their SJR and divided into four equal groups (quartiles). Q1 (green) comprises the quarter of journals with the highest values, Q2 (yellow) the second highest, Q3 (orange) the third highest, and Q4 (red) the lowest.

The SJR is a size-independent prestige indicator that ranks journals by their 'average prestige per article'. It is based on the idea that 'all citations are not created equal'. SJR is a measure of the scientific influence of journals that accounts for both the number of citations received by a journal and the importance or prestige of the journals from which such citations come. It measures the scientific influence of the average article in a journal and expresses how central to the global scientific discussion an average article of the journal is.

Evolution of the number of published documents. All types of documents are considered, including citable and non-citable documents.

This indicator counts the number of citations received by documents from a journal and divides them by the total number of documents published in that journal. The chart shows the evolution of the average number of times documents published in a journal in the past two, three, and four years have been cited in the current year. The two-year line is equivalent to the Journal Impact Factor™ (Thomson Reuters) metric.

Evolution of the total number of citations and journal self-citations received by a journal's published documents during the three previous years. Journal self-citation is defined as the number of citations from a journal's citing articles to articles published by the same journal.

Evolution of the number of total citations per document and external citations per document (i.e., with journal self-citations removed) received by a journal's published documents during the three previous years. External citations are calculated by subtracting the number of self-citations from the total number of citations received by the journal's documents.
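
The per-document indicators described in the two paragraphs above reduce to simple ratios. A minimal sketch with hypothetical counts (the function name is mine):

```python
def citation_indicators(cites_total, self_cites, docs_published):
    """Per-document citation indicators in the spirit of the SCImago charts."""
    external = cites_total - self_cites  # self-citations removed
    return {
        "cites_per_doc": cites_total / docs_published,
        "external_cites_per_doc": external / docs_published,
        "self_cite_share": self_cites / cites_total,
    }

# Hypothetical journal: 240 citations (60 of them self-citations)
# to 120 documents published in the window.
print(citation_indicators(240, 60, 120))
# → {'cites_per_doc': 2.0, 'external_cites_per_doc': 1.5, 'self_cite_share': 0.25}
```

A large gap between total and external cites per document signals heavy self-citation, one of the patterns these charts are designed to expose.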

International collaboration accounts for articles produced by researchers from several countries. The chart shows the ratio of a journal's documents signed by researchers from more than one country, that is, including more than one country in the address list.

Not every article in a journal is considered primary research and therefore "citable". This chart shows the ratio of a journal's articles containing substantial research (research articles, conference papers, and reviews) in three-year windows versus documents other than research articles, reviews, and conference papers.

Ratio of a journal's items, grouped in three-year windows, that have been cited at least once versus those not cited during the following year.

Scimago Journal & Country Rank


Indian J Pharmacol, v.51(3), May–Jun 2019

Scrutinizing predator journals in pharmacology and calculating their predatory rate

Kopal Sharma

Department of Pharmacology, SMS Medical College, Jaipur, Rajasthan, India

Lokendra Sharma

BACKGROUND:

As the list of predatory journals burgeons, researchers should know how to calculate the predatory rate (PR) of the journals in which they aim to publish their work, so they can guard themselves against publishing in bogus journals.

AIM AND OBJECTIVES:

Our aim was to determine the predatory rate of various pharmacology journals.

MATERIALS AND METHODS:

Here, we examined the recently updated (2017) list of standalone predatory journals created and maintained by Beall, pertinent to all aspects of pharmacology, including pharmacy, pharmaceutical sciences, and pharmacognosy. The PR of the various journals was calculated.

Of the 131 journals pertinent to the pharmacology field, 45.03% had a PR between 0.72 and 0.84; 98.5% of the journals were classified as predatory, whereas only 2 (1.53%) were classified as engaging in predatory practices.

CONCLUSION:

These findings should be an eye-opener for researchers, who should choose journals deliberately in order to gain genuine recognition for their work.

Introduction

With the advancing internet era, the entire process of scholarly publishing has been revolutionized.[ 1 ] Many fraudulent journals have mushroomed, and even the field of pharmacology is now drenched in them. Gullible researchers are often trapped by the bait these journals lay, as publishing research is now mandatory under the stringent norms of the Medical Council of India (MCI), which prescribe a minimum number of publications for every promotion and even for good placements in medical education.[ 2 ] Inexperienced researchers are also easy prey, as they may struggle to judge the integrity of journals in their field.[ 3 ]

A librarian from Denver named Jeffrey Beall first introduced the term “predatory journals” and has been a pioneer in the campaign against fraudulent publishing practices.[ 4 ] A few cardinal features help differentiate legitimate journals from predatory ones:

  1. Use of catchy words such as “international,” “global,” “world,” and “universal” in the title to attract authors' attention.
  2. Multidisciplinary scope.
  3. Fast publication, with a review process of 2–3 days to a week.
  4. Spurious claims of being indexed in well-acknowledged databases such as PubMed, the Directory of Open Access Journals, or even Web of Science.
  5. Use of fictitious impact factors such as the Global Impact Factor and Universal Impact Factor.
  6. Use of a generic e-mail address such as Gmail or Yahoo mail.
  7. Publication of enormous numbers of manuscripts in each issue and regular invitations for manuscript submission that load authors' mailboxes with spam.[ 5 ]

If authors know how to calculate the predatory rate (PR) of a journal, they at least have a chance to think twice before submitting their valuable work to a bogus journal and regretting it later.

Materials and Methods

In our study, we examined the recently updated (2017) list of standalone predatory journals pertinent to all aspects of pharmacology, including pharmacy, pharmaceutical sciences, and pharmacognosy. The list was accessed from the internet, and all journals related to pharmacology were segregated. A total of 32 journals on the list were pertinent to pharmacology, whereas 110 were related to the pharmacy, pharmaceutical, and pharmacognosy fields combined. The PR was evaluated for all journals based on the modified work of Dadkhah and Bianciardi.[ 5 ]

Evaluation: For the calculation of the PR, each criterion listed in Table 1 was given a weight between 0 and 2. For example, considering the editorial process of the journal: if the journal gives a generic e-mail address for the editor (Gmail or Yahoo mail), a score of 1 is given; if no editor e-mail is given at all, a score of 2 is given; and if an official editor e-mail is given, a score of 0 is given. The scores for all criteria were then added and divided by fifteen, as there are fifteen criteria in total. The PR was thus calculated as the sum of the fifteen criterion scores divided by fifteen.
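
A minimal sketch of this computation: fifteen criteria, each scored 0–2, summed and divided by fifteen. The example scores are invented, and the handling of a PR exactly equal to 0.22 (left undefined in the text) is my choice:

```python
def predatory_rate(scores):
    """PR = sum of the fifteen criterion scores (each 0, 1, or 2) / 15."""
    assert len(scores) == 15 and all(s in (0, 1, 2) for s in scores)
    return sum(scores) / 15

def classify(pr):
    """Thresholds from the paper; PR == 0.22 itself is not defined there."""
    if pr == 0:
        return "not predatory"
    return "predatory practices" if pr < 0.22 else "predatory"

# Invented scoring of one journal across the 15 criteria
scores = [1, 2, 1, 0, 2, 1, 1, 2, 0, 1, 1, 0, 1, 0, 0]
pr = predatory_rate(scores)
print(round(pr, 2), classify(pr))  # → 0.87 predatory
```

Under this scheme the PR ranges from 0 (no red flags) to 2 (every criterion scored at the maximum), which is consistent with the journal-level values between 0.2 and 0.96 reported below.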

Different criteria for scoring a predatory journal, based on the modified work of Dadkhah and Bianciardi[ 5 ]


Scoring: The scoring was done as per the previous study of Tosti et al.[ 6 ]

It was taken into account that if:

  • PR = 0, the journal is not predatory.
  • PR < 0.22, the journal uses predatory practices.
  • PR > 0.22, the journal is predatory.

The different criteria taken into account for the calculation of the PR are depicted in [ Table 1 ]. A total of twenty-two pharmacology journals were segregated from Beall's 2017 list. The highest PR calculated was 0.86, while the lowest was 0.2 [ Table 2 ]. The largest group, 59 journals (45.03%), had a PR between 0.72 and 0.84, as shown in Figure 1 .

Predatory rate for various pharmacology journals


Predatory range for different journals

Our study reveals that predatory practice is like a virus outbreak that has infected even the field of pharmacology, to such an extent that it has become difficult to pick good journals out of the heap of bad ones. In the current scenario, there is a rat race for quantity of publication rather than quality of work. A journal with a PR above 0.22 should never be chosen for manuscript submission, given its high predatory ranking.

The results of our study are comparable to those of a previous study by Memon, in which the authors calculated the predatory value of the journals from which they received e-mail solicitations over a period of one year.[ 7 ] Only one journal in that study had a PR between 0.2 and 0.32; similarly, in our study, only two pharmacology journals had a PR in that range. The highest PR in our study was 0.96, while in the study of Memon,[ 7 ] the highest PR was 1.

In our study, 1.53% of journals were classified as using predatory practices, while 98.5% were classified as predatory. This contradicts the study of Tosti and Maddy, in which 10.5% of dermatology journals were classified as using predatory practices and the remaining 89.5% as predatory.[ 6 ]

The strength of our study is that it lists all the pharmacology journals of a predatory nature according to Beall's list, which can help authors choose among journals when submitting their valuable research work. Its limitation is that we could not include all the journals on Beall's list, as our aim was to scrutinize only pharmacology-specific journals.

Awareness programs in the form of symposia and workshops on scientific writing and publishing can be undertaken by the medical education units of medical colleges, where medical teachers can be trained to choose suitable journals for their publications. As the majority of the pharmacology journals on Beall's list were found to be predatory, authors are advised first to calculate the predatory score of a particular journal and, based on it, decide whether to submit their work to that journal. This will help improve the quality of publishing.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.


  • 13 January 2020

Predatory-journal papers have little scientific impact

  • Dalmeet Singh Chawla


Papers published in ‘predatory’ journals attract little attention from scientists, and get cited much less than those in reputable publications, an analysis shows.


doi: https://doi.org/10.1038/d41586-020-00031-6

Björk, B.-C., Kanto-Karvonen, S. & Harviainen, J. T. Preprint at https://arxiv.org/abs/1912.10228 (2019).

Shen, C. & Björk, B.-C. BMC Med. 13, 230 (2015).

COMMENTS

  1. Hundreds of 'predatory' journals indexed on leading ...

    The widely used academic database Scopus hosts papers from more than 300 potentially 'predatory' journals that have questionable publishing practices, an analysis has found. Together, these ...

  2. Predatory Journals: What They Are and How to Avoid Them

    Table 1. Common Characteristics of Predatory Journals. Claims to be a peer reviewed open access publication but does not provide adequate peer review or the level of peer review promised (some predatory journals repeatedly use a template as their peer review report). 6. Advertises a Journal Impact Factor or other citation metric on the website ...

  3. PDF Five Predatory Mega-Journals: A Review

    of Science, International Journal of Current Research, International Journal of Science and Advanced Technology (IJSAT), International Journal of Sciences (IJSciences), and World Journal of Science and Technology. Each of these journals copies the mega-journal model pioneered by PLOS ONE. That is, they have a broad scope, they per-

  4. Predatory publishers' latest scam: bootlegged and rebranded papers

    Falsifying peer review on a large scale would be very difficult for egregious predatory journals. Quasi-predatory journals would reveal poor-quality or ignored reviews. High-status journals ...

  5. Predatory Journals: What They Are and How to Avoid Them

    Abstract. Predatory journals—also called fraudulent, deceptive, or pseudo-journals—are publications that claim to be legitimate scholarly journals but misrepresent their publishing practices. Some common forms of predatory publishing practices include falsely claiming to provide peer review, hiding information about article processing ...

  6. Potential predatory and legitimate biomedical journals: can you tell

    Ninety-three potential predatory journals, 99 open access journals, and 100 subscription-based journals were included in the analysis. The process of journal identification, inclusion, and exclusion within each study group is outlined in Fig. 1; 397 journals were identified as potential predatory journals. After de-duplication and screening for journals publishing biomedical content, 156 ...

  7. Beall's List

    Antarctic Journals. Aperito Online Publishing. Apex Journal. Applied Science Innovations ( note: their journal "Carbon: Science and Technology" is indexed by DOAJ) APST Publication. Arabian Group of Journals (AGJ) Aradhya International Publication. ARC Journals. Archers & Elevators Publishing House.

  8. Predatory journals and their practices present a conundrum for

    Furthermore, participants described the practical issues of including studies published within a predatory journal in an evidence synthesis, these included understanding research capacity [of the review team], the legality of publicly labelling studies as predatory, and reporting the article from the predatory journal within the evidence synthesis.

  9. Rising number of 'predatory' academic journals undermines research and

    These for-profit publications are often known as predatory journals because they are publications that claim to be legitimate scholarly journals but prey on unsuspecting academics to pay to ...

  10. Keeping medical science trustworthy: The threat by predatory journals

    The essentials of predatory journals. 1. Not the main but the only objective is to earn money. 2. The reliability of the published 'scientific' articles is not important. 3. The subject of the published manuscripts is not important at all and is frequently not related to the subjects mentioned in the journal's title.

  11. Predatory journals: evolution keeps them under the radar

    In their early days, such journals were ephemeral, with false claims of indexing, vague titles (such as International Journal of Applied Sciences and Engineering), fraudulent publication fees and ...

  12. How to identify predatory journals in a search

    They may accept articles for publication without conducting a thorough peer review of the research methodology and content. 1-4 Importantly, articles in predatory journals may contain errors and misleading information, have significantly flawed research methods, and even include plagiarized content. 1-7 In a study of nursing predatory journals ...

  13. Are papers published in predatory journals worthless? A geopolitical

    Abstract. This study uses content-based citation analysis to move beyond the simplified classification of predatory journals. We present that, when we analyze papers not only in terms of the quantity of their citations but also the content of these citations, we are able to show the various roles played by papers published in journals accused of being predatory. To accomplish this, we analyzed ...

  14. Predatory journals in psychiatry

    The first predatory journal was launched in 2007, and the most recent one in 2018. The predatory journals published 6925 articles in total between Jan 30, 2007, and Feb 20, 2019, with a mean of 54·96 articles per journal ranging from zero to 836 articles. These articles had received a total of 19673 citations in that time (the citation data ...

  15. Analysis of potential predatory journals in radiology

    The term predatory journal was first coined by Jeffrey Beall, a librarian at the University of Colorado, to describe a fraudulent open-access model that applies charges to the authors under the pretense of legitimate publishing operations without providing adequate editorial services, including proper peer-review, as with legitimate journals ().At present, more than 10 000 predatory journals ...

  16. International Journal of Current Science Research and Review

    International Journal of Current Science Research and Review Publish original research work of multidisciplinary field of Science .The Journal is welcoming original Research Articles, Book Reviews, Commentaries, Reviewed Articles, Technical Notes, Snippets, Case Studies, Theses and Dissertations relevant to the fields of all subject .

  17. International Journal of Current Research and Review

    The SJR is a size-independent prestige indicator that ranks journals by their 'average prestige per article'. It is based on the idea that 'all citations are not created equal'. SJR is a measure of scientific influence of journals that accounts for both the number of citations received by a journal and the importance or prestige of the journals ...

  18. Scrutinizing predator journals in pharmacology and calculating their

    Similarly, in our study also, only two journals of pharmacology have PR between these values. The highest PR in our study was 0.96, while in the study of Memon, [ 7] the highest PR was 1. In our study, 1.53% of journals were classified under predatory practices while 98.5% were classified as predatory. This is contradictory to the study of ...

  19. search

    Journal Cover Page; Information. Journal Indexing; Author Instructions; Publication Fee; Archives. Vol 7 Year 2024. Volume 07 Issue 04 April 2024; Volume 07 Issue 03 March 2024; Volume 07 Issue 02 February 2024; Volume 07 Issue 01 January 2024; ... Current Issue; Archives; Contact Us; Register; Login;

  20. Predatory-journal papers have little scientific impact

    Predatory journals are those that charge authors high article-processing fees but don't provide expected publishing services, such as peer review or other quality checks.