Metascience

Metascience (also known as meta-research) is the use of scientific methodology to study science itself. Metascience seeks to increase the quality of scientific research while reducing inefficiency. It is also known as "research on research" and "the science of science", as it uses research methods to study how research is done and to find where improvements can be made. Metascience concerns itself with all fields of research and has been described as "a bird's eye view of science".[1] In the words of John Ioannidis, "Science is the best thing that has happened to human beings … but we can do it better."[2]

In 1966, an early meta-research paper examined the statistical methods of 295 papers published in ten high-profile medical journals. It found that "in almost 73% of the reports read … conclusions were drawn when the justification for these conclusions was invalid." Meta-research in the following decades found many methodological flaws, inefficiencies, and poor practices in research across numerous scientific fields. Many scientific studies could not be reproduced, particularly in medicine and the soft sciences. The term "replication crisis" was coined in the early 2010s as part of a growing awareness of the problem.[3]

Measures have been implemented to address the issues revealed by metascience. These measures include the pre-registration of scientific studies and clinical trials as well as the founding of organizations such as CONSORT and the EQUATOR Network that issue guidelines for methodology and reporting. There are continuing efforts to reduce the misuse of statistics, to eliminate perverse incentives from academia, to improve the peer review process, to systematically collect data about the scholarly publication system,[4] to combat bias in scientific literature, and to increase the overall quality and efficiency of the scientific process.

History

In 1966, an early meta-research paper examined the statistical methods of 295 papers published in ten high-profile medical journals. It found that, "in almost 73% of the reports read … conclusions were drawn when the justification for these conclusions was invalid."[6] In 2005, John Ioannidis published a paper titled "Why Most Published Research Findings Are False", which argued that a majority of papers in the medical field produce conclusions that are wrong.[5] The paper went on to become the most downloaded paper in the Public Library of Science[7][8] and is considered foundational to the field of metascience.[9] In a related study with Jeremy Howick and Despina Koletsi, Ioannidis showed that only a minority of medical interventions are supported by "high quality" evidence according to the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach.[10] Later meta-research identified widespread difficulty in replicating results in many scientific fields, including psychology and medicine. This problem was termed "the replication crisis". Metascience has grown as a reaction to the replication crisis and to concerns about waste in research.[11]

Many prominent publishers are interested in meta-research and in improving the quality of their publications. Top journals such as Science, The Lancet, and Nature provide ongoing coverage of meta-research and problems with reproducibility.[12] In 2012, PLOS ONE launched a Reproducibility Initiative. In 2015, BioMed Central introduced a minimum-standards-of-reporting checklist to four titles.

The first international conference in the broad area of meta-research was the Research Waste/EQUATOR conference held in Edinburgh in 2015; the first international conference on peer review was the Peer Review Congress held in 1989.[13] In 2016, the journal Research Integrity and Peer Review was launched. The journal's opening editorial called for "research that will increase our understanding and suggest potential solutions to issues related to peer review, study reporting, and research and publication ethics".[14]

Fields and topics of meta-research

An illustrative visualization of a conception of scientific knowledge generation, structured by layers, with the "Institution of Science" being the subject of metascience.

Metascience can be categorized into five major areas of interest: Methods, Reporting, Reproducibility, Evaluation, and Incentives. These correspond, respectively, with how to perform, communicate, verify, evaluate, and reward research.[1]

Methods

Metascience seeks to identify poor research practices, including biases in research, poor study design, and misuse of statistics, and to find methods to reduce these practices.[1] Meta-research has identified numerous biases in scientific literature.[15] Of particular note is the widespread misuse of p-values and abuse of statistical significance.[16]

Scientific data science

Scientific data science is the use of data science to analyse research papers. It encompasses both qualitative and quantitative methods. Research in scientific data science includes fraud detection[17] and citation network analysis.[18]
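
As a minimal illustration of the quantitative side of such work, the sketch below builds a toy citation graph and ranks papers with PageRank, the technique used (in a weighted variant) in the citation-network study cited above. The paper IDs and the unweighted graph are illustrative assumptions, not data or methods from the cited studies.

```python
# Minimal citation-network sketch: nodes are papers, an edge A -> B
# means paper A cites paper B, and PageRank serves as a simple impact
# score (the cited studies use more refined, weighted variants).
import networkx as nx

# Toy citation data with hypothetical paper IDs.
citations = [
    ("paper_A", "paper_C"), ("paper_B", "paper_C"),
    ("paper_C", "paper_D"), ("paper_B", "paper_D"),
]

G = nx.DiGraph(citations)

# Rank flows along edges, so frequently cited papers score highest.
scores = nx.pagerank(G, alpha=0.85)
for paper, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{paper}: {score:.3f}")
```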

Journalology

Journalology, also known as publication science, is the scholarly study of all aspects of the academic publishing process.[19][20] The field seeks to improve the quality of scholarly research by implementing evidence-based practices in academic publishing.[21] The term "journalology" was coined by Stephen Lock, the former editor-in-chief of The BMJ. The first Peer Review Congress, held in 1989 in Chicago, Illinois, is considered a pivotal moment in the founding of journalology as a distinct field.[21] The field of journalology has been influential in pushing for study pre-registration in science, particularly in clinical trials. Clinical-trial registration is now expected in most countries.[21]

Reporting

Meta-research has identified poor practices in reporting, explaining, disseminating and popularizing research, particularly within the social and health sciences. Poor reporting makes it difficult to accurately interpret the results of scientific studies, to replicate studies, and to identify biases and conflicts of interest among the authors. Solutions include the implementation of reporting standards and greater transparency in scientific studies (including better requirements for disclosure of conflicts of interest). There is an attempt to standardize reporting of data and methodology through the creation of guidelines by reporting bodies such as CONSORT and the larger EQUATOR Network.[1]

Reproducibility

The replication crisis is an ongoing methodological crisis in which it has been found that many scientific studies are difficult or impossible to replicate.[22][23] While the crisis has its roots in the meta-research of the mid- to late 1900s, the phrase "replication crisis" was not coined until the early 2010s[24] as part of a growing awareness of the problem.[1] The replication crisis particularly affects psychology (especially social psychology) and medicine,[25][26] including cancer research.[27][28] Replication is an essential part of the scientific process, and the widespread failure of replication puts into question the reliability of affected fields.[29]

Moreover, replication of research (or failure to replicate) is considered less influential than original research, and is less likely to be published in many fields. This discourages the reporting of, and even attempts to replicate, studies.[30][31]

Evaluation and incentives

Metascience seeks to create a scientific foundation for peer review. Meta-research evaluates peer review systems including pre-publication peer review, post-publication peer review, and open peer review. It also seeks to develop better research funding criteria.[1]

Metascience seeks to promote better research through better incentive systems. This includes studying the accuracy, effectiveness, costs, and benefits of different approaches to ranking and evaluating research and those who perform it.[1] Critics argue that perverse incentives have created a publish-or-perish environment in academia which promotes the production of junk science, low-quality research, and false positives.[32][33] According to Brian Nosek, "The problem that we face is that the incentive system is focused almost entirely on getting research published, rather than on getting research right."[34] Proponents of reform seek to structure the incentive system to favor higher-quality results,[35] for example by judging quality on the basis of narrative expert evaluations ("rather than [only or mainly] indices"), institutional evaluation criteria, guarantees of transparency, and professional standards.[36]

Contributorship

Studies have proposed machine-readable standards and (a taxonomy of) badges for science publication management systems that home in on contributorship – who contributed what and how much of the research labor – rather than the traditional concept of plain authorship – who was involved in any way in the creation of a publication.[37][38][39][40] A study pointed out one of the problems associated with the ongoing neglect of such contribution nuance: it found that "the number of publications has ceased to be a good metric as a result of longer author lists, shorter papers, and surging publication numbers".[41]
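
One cited proposal advocates the CRediT (Contributor Roles Taxonomy) for this purpose.[37] Below is a sketch of what a machine-readable contributorship record using CRediT role names might look like; the record layout and contributor names are illustrative assumptions, not a format mandated by the cited proposals.

```python
# Sketch of a machine-readable contributorship record using role names
# from the CRediT taxonomy. The record layout and contributor names are
# illustrative assumptions, not a standard from the cited proposals.
import json

contributorship = {
    "title": "Example study",
    "contributors": [
        {"name": "A. Researcher",
         "roles": ["Conceptualization", "Methodology", "Writing - original draft"]},
        {"name": "B. Analyst",
         "roles": ["Software", "Formal analysis", "Data curation"]},
    ],
}
print(json.dumps(contributorship, indent=2))
```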

Assessment factors

Factors other than a submission's merits can substantially influence peer reviewers' evaluations.[42] Such factors may, however, sometimes be legitimate, as with track records concerning the veracity of a researcher's prior publications and their alignment with public interests. Even so, evaluation systems, including those of peer review, may substantially lack mechanisms and criteria oriented toward merit, real-world positive impact, progress and public usefulness, relying instead on analytical indicators such as citation counts or altmetrics, even though such indicators can at best serve as partial measures of those ends.[43][44] Rethinking the academic reward structure "to offer more formal recognition for intermediate products, such as data" could have positive impacts and reduce data withholding.[45]

Recognition of training

A commentary noted that academic rankings do not consider where (in which country and at which institute) the respective researchers were trained.[46]

Scientometrics

Scientometrics concerns itself with measuring bibliographic data in scientific publications. Major research issues include the measurement of the impact of research papers and academic journals, the understanding of scientific citations, and the use of such measurements in policy and management contexts.[47] Studies suggest that "metrics used to measure academic success, such as the number of publications, citation number, and impact factor, have not changed for decades" and have to some degree "ceased" to be good measures,[41] leading to issues such as "overproduction, unnecessary fragmentations, overselling, predatory journals (pay and publish), clever plagiarism, and deliberate obfuscation of scientific results so as to sell and oversell".[48]
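
For a concrete sense of the indicators in question, the sketch below computes two of them from a list of per-paper citation counts. The definitions are simplified for illustration (a real journal impact factor, for example, uses a specific two-year citation window), and the counts are invented.

```python
# Simplified sketches of two common scientometric indicators; the
# citation counts are invented, and a real journal impact factor uses
# a specific two-year citation window rather than a plain mean.
def h_index(citation_counts):
    """Largest h such that h papers each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    return sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)

def mean_citations(citation_counts):
    """Impact-factor-like average of citations per paper."""
    return sum(citation_counts) / len(citation_counts) if citation_counts else 0.0

papers = [25, 8, 5, 3, 3, 1, 0]          # hypothetical citation counts
print(h_index(papers))                   # -> 3
print(round(mean_citations(papers), 2))  # -> 6.43
```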

Novel tools in this area include systems to quantify how much a cited node informs the citing node.[49] This can be used to convert unweighted citation networks into weighted ones and then for importance assessment, deriving "impact metrics for the various entities involved, like the publications, authors etc",[50] as well as, among other tools, for search engines and recommendation systems.

Science governance

Science funding and science governance can also be explored and informed by metascience.[51]

Incentives

Various interventions, such as prioritization, can be important. For instance, the concept of differential technological development refers to deliberately developing technologies – e.g. control, safety and policy technologies versus risky biotechnologies – at different precautionary paces to decrease risks, mainly global catastrophic risk, by influencing the sequence in which technologies are developed.[52][53] Relying only on established forms of legislation and incentives to ensure the right outcomes may not be adequate, as these may often be too slow[54] or inappropriate.

Other incentives to govern science and related processes, including via metascience-based reforms, may include: ensuring accountability to the public (in terms of, for example, the accessibility of research, especially publicly funded research, and whether it addresses various research topics of public interest in a serious manner); increasing the qualified, productive scientific workforce; improving the efficiency of science to improve problem-solving in general; and facilitating that unambiguous societal needs based on solid scientific evidence – such as about human physiology – are adequately prioritized and addressed. Such interventions, incentives and intervention designs can themselves be subjects of metascience.

Science funding and awards

Cluster network of scientific publications in relation to Nobel prizes.

Funding for climate research in the natural and technical sciences versus the social sciences and humanities[55]

Scientific awards are one category of science incentives. Metascience can explore existing and hypothetical systems of science awards. For instance, it found that work honored by Nobel Prizes clusters in only a few scientific fields, with only 36 of 71 having received at least one Nobel Prize, out of the 114/849 domains into which science could be divided according to the DC2 and DC3 classification systems. Five of the 114 domains were shown to make up over half of the Nobel Prizes awarded in 1995–2017 (particle physics [14%], cell biology [12.1%], atomic physics [10.9%], neuroscience [10.1%], molecular chemistry [5.3%]).[56][57]

A study found that the delegation of responsibility by policy-makers – a centralized, authority-based, top-down approach in which knowledge production and appropriate funding are handed to science, with science subsequently somehow delivering "reliable and useful knowledge to society" – is too simple.[51]

Measurements show that the allocation of biomedical resources can be more strongly correlated with previous allocations and research than with the burden of disease.[58]

A study suggests that "[i]f peer review is maintained as the primary mechanism of arbitration in the competitive selection of research reports and funding, then the scientific community needs to make sure it is not arbitrary".[42]

Studies indicate there is a need to "reconsider how we measure success" (see #Factors of success and progress).[41]

Funding data

Funding information from grant databases and funding acknowledgment sections can be a source of data for scientometric studies, e.g. for investigating or recognizing the impact of funding entities on the development of science and technology.[59]

Research questions and coordination

Scientists often communicate open research questions. Sometimes such questions are crowdsourced and/or aggregated, and sometimes supplemented with priorities or other details. A common way open research questions are identified, communicated, established/confirmed and prioritized is their inclusion in scientific reviews of a sub-field or of a specific research question, including in systematic reviews and meta-analyses. Other channels include reports by science journalists and dedicated (sub-)websites such as 80000hours.org's "research questions by discipline"[60] or the Wikipedia lists of unsolved problems,[61][62][63] aggregative/integrative studies,[61] as well as posts on Q&A websites and forums, sometimes categorized or marked as unsolved.[64] Online surveys have been used to generate priority research topics, which were then classified into broader themes.[65] Such approaches may improve research relevance and value,[66] strengthen the rationale for dedicating (or expanding) society's limited resources, or strengthen the rationale for funding a specific study.[citation needed]

Risk governance

See also: § Differential R&D

Biosecurity requires the cooperation of scientists, technicians, policy makers, security engineers, and law enforcement officials.[67][68]

Philosopher Toby Ord, in his 2020 book The Precipice: Existential Risk and the Future of Humanity, questions whether the current international conventions regarding the regulation of biotechnology research and development, and self-regulation by biotechnology companies and the scientific community, are adequate.[69][70]

In a paywalled article, American scientists proposed various policy-based measures to reduce the large risks from life sciences research – such as pandemics through accident or misapplication. Risk management measures may include novel international guidelines, effective oversight, improvement of US policies to influence policies globally, and identification of gaps in biosecurity policies along with potential approaches to address them.[71][72]

Science communication and public use

It has been argued that "science has two fundamental attributes that underpin its value as a global public good: that knowledge claims and the evidence on which they are based are made openly available to scrutiny, and that the results of scientific research are communicated promptly and efficiently".[73] Metascientific research is exploring topics of science communication such as media coverage of science, science journalism and online communication of results by science educators and scientists.[74][75][76][77] A study found that the "main incentive academics are offered for using social media is amplification" and that science communication should be "moving towards an institutional culture that focuses more on how these [or such] platforms can facilitate real engagement with research".[78] Science communication may also involve the communication of societal needs, concerns and requests to scientists.

Alternative metrics tools

Alternative metrics tools can be used not only to help in assessment (of performance and impact)[58] and findability, but also to aggregate many of the public discussions about a scientific paper: in social media such as Reddit, in citations on Wikipedia, and in reports about the study in the news media. These can then in turn be analyzed in metascience or provided and used by related tools.[79] In terms of assessment and findability, altmetrics rate publications' performance or impact by the interactions they receive through social media or other online platforms,[80] which can, for example, be used for sorting recent studies by measured impact, including before other studies cite them. The specific procedures of established altmetrics are not transparent,[80] and the algorithms used cannot be customized or altered by the user the way open source software can. A study has described various limitations of altmetrics and points "toward avenues for continued research and development".[81] They are also limited in their use as a primary tool for researchers to find constructive feedback. (see above)
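
As a toy illustration of the kind of aggregation involved, the sketch below combines mention counts from several public sources into a single composite score. The source list and weights are invented for this example; as noted above, real providers' actual weighting procedures are not public.

```python
# Toy altmetric-style aggregation: weight mention counts from several
# sources into one composite score. The sources and weights here are
# invented for illustration; real providers' procedures are not public.
WEIGHTS = {"news": 8.0, "wikipedia": 3.0, "reddit": 0.25, "twitter": 0.25}

def composite_score(mentions):
    """Weighted sum of mention counts; unknown sources count as zero."""
    return sum(WEIGHTS.get(source, 0.0) * n for source, n in mentions.items())

paper_mentions = {"news": 2, "wikipedia": 1, "reddit": 12, "twitter": 40}
print(composite_score(paper_mentions))  # 16 + 3 + 3 + 10 = 32.0
```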

Societal implications and applications

It has been suggested that it may benefit science if "intellectual exchange—particularly regarding the societal implications and applications of science and technology—are better appreciated and incentivized in the future".[58]

Knowledge integration

Primary studies "without context, comparison or summary are ultimately of limited value", and various types[additional citation(s) needed] of research syntheses and summaries integrate primary studies.[82] Progress on key social-ecological challenges of the global environmental agenda is "hampered by a lack of integration and synthesis of existing scientific evidence", with a "fast-increasing volume of data", compartmentalized information and generally unmet evidence synthesis challenges.[83] According to Khalil, researchers are facing the problem of too many papers – e.g. in March 2014, more than 8,000 papers were submitted to arXiv – and to "keep up with the huge amount of literature, researchers use reference manager software, they make summaries and notes, and they rely on review papers to provide an overview of a particular topic". He notes that review papers are usually (only) written for topics on which many papers have already been written and can get outdated quickly, and suggests "wiki-review papers" that are continuously updated with new studies on a topic, summarize many studies' results and suggest future research.[84] A study suggests that if a scientific publication is cited in a Wikipedia article, this could potentially be considered an indicator of some form of impact for this publication,[80] for example as this may, over time, indicate that the reference has contributed to a high-level summary of the given topic.

Further information: § Knowledge integration and living documents

Science journalism

Science journalists play an important role in the scientific ecosystem and in science communication to the public, and need to "know how to use relevant information when deciding whether to trust a research finding, and whether and how to report on it", vetting the findings that get transmitted to the public.[85]

Science education

Some studies investigate science education, e.g. the teaching of selected scientific controversies[86] and the historical discovery process behind major scientific conclusions,[87] as well as common scientific misconceptions.[88] Education can also be a topic of metascience more generally, such as how to improve the quality of scientific outputs and shorten the time before researchers can begin scientific work, or how to enlarge and retain various scientific workforces.

Science misconceptions and anti-science attitudes

Many students have misconceptions about what science is and how it works.[89] Anti-science attitudes and beliefs are also a subject of research.[90][91] Hotez suggests antiscience "has emerged as a dominant and highly lethal force, and one that threatens global security", and that there is a need for "new infrastructure" that mitigates it.[92]

Evolution of sciences

Scientific practice

Number of authors of research articles in six journals through time[36]

Trends in the diversity of work cited, the mean number of self-citations, and the mean age of cited work may indicate that papers are using "narrower portions of existing knowledge".[93]

Metascience can investigate how scientific processes evolve over time. A study found that teams are growing in size, "increasing by an average of 17% per decade".[58] (see labor advantage below)

ArXiv’s yearly submission rate growth over 30 years.[94]

It has been found that prevalent forms of non-open-access publication, and the prices charged for many conventional journals – even for publicly funded papers – are unwarranted, unnecessary or suboptimal, and constitute detrimental barriers to scientific progress.[73][95][96][97] Open access can save considerable amounts of financial resources, which could be used otherwise, and can level the playing field for researchers in developing countries.[98] There are substantial expenses for subscriptions, for gaining access to specific studies, and for article processing charges. Paywall: The Business of Scholarship is a documentary on such issues.[99]

Another topic is the established styles of scientific communication (e.g. long text-form studies and reviews) and established scientific publishing practices – there are concerns about a "glacial pace" of conventional publishing.[100] The use of preprint servers to publish study drafts early is increasing, and open peer review,[101] new tools to screen studies,[102] and improved matching of submitted manuscripts to reviewers[103] are among the proposals to speed up publication.

Science overall and intrafield developments

A visualization of scientific outputs by field in OpenAlex.[104]
A study can be part of multiple fields,[clarification needed] and a lower number of papers is not necessarily detrimental[48] to a field.

Change of number of scientific papers by field according to OpenAlex[104]

Number of PubMed search results for «coronavirus» by year from 1949 to 2020.

Studies have various kinds of metadata which can be utilized, complemented and made accessible in useful ways. OpenAlex is a free online index of over 200 million scientific documents that integrates and provides metadata such as sources, citations, author information, scientific fields and research topics. Its API and open source website can be used for metascience, scientometrics and novel tools that query this semantic web of papers.[105][106][107] Another project under development, Scholia, uses metadata of scientific publications for various visualizations and aggregation features, such as providing a simple user interface that summarizes the literature about a specific feature of the SARS-CoV-2 virus using Wikidata's "main subject" property.[108]
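
As a minimal sketch of how such an index can be queried programmatically, the snippet below requests a few works from OpenAlex's public REST API. The endpoint, parameters and field names follow OpenAlex's published API conventions but should be checked against its current documentation.

```python
# Minimal sketch of querying the OpenAlex API for works matching a
# search term. Endpoint, parameters and field names follow OpenAlex's
# public REST API but should be verified against current documentation.
import requests

resp = requests.get(
    "https://api.openalex.org/works",
    params={"search": "metascience", "per-page": 5},
    timeout=30,
)
resp.raise_for_status()
for work in resp.json()["results"]:
    print(work.get("publication_year"), work.get("display_name"))
```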

Subject-level resolutions

Beyond metadata explicitly assigned to studies by humans, natural language processing and AI can be used to assign research publications to topics. One study investigating the impact of science awards used such methods to associate a paper's text (not just keywords) with the linguistic content of Wikipedia's scientific topic pages ("pages are created and updated by scientists and users through crowdsourcing"), creating meaningful and plausible classifications of high-fidelity scientific topics for further analysis or navigability.[109]

Further information: § Topic mapping

Growth or stagnation of science overall

Rough trend of scholarly publications about biomarkers according to Scholia; biomarker-related publications may not follow closely the number of viable biomarkers[110]

The CD index for papers published in Nature, PNAS, and Science and Nobel-Prize-winning papers[93]

The CD index may indicate a «decline of disruptive science and technology»[93]

Metascience research is investigating the growth of science overall, using e.g. data on the number of publications in bibliographic databases. A study found that segments with different growth rates appear related to phases of "economic (e.g., industrialization)" – money is considered a necessary input to the science system – "and/or political developments (e.g., Second World War)". It also confirmed a recent exponential growth in the volume of scientific literature and calculated an average doubling period of 17.3 years.[111]
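
A doubling period pins down the implied constant annual growth rate, and vice versa; the short check below works through that arithmetic for the quoted 17.3-year figure.

```python
# Arithmetic check: a doubling period of 17.3 years corresponds to a
# constant annual growth rate r satisfying (1 + r) ** 17.3 == 2.
import math

doubling_years = 17.3
r = 2 ** (1 / doubling_years) - 1
print(f"implied annual growth rate: {r:.2%}")                    # ~4.09%
print(f"back-check: {math.log(2) / math.log(1 + r):.1f} years")  # 17.3
```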

However, others have pointed out that it is difficult to measure scientific progress in meaningful ways, partly because it is hard to accurately evaluate how important any given scientific discovery is. A variety of perspectives on the trajectories of science overall (impact, number of major discoveries, etc.) have been described in books and articles, including that science is becoming harder (per dollar or hour spent), that if science is "slowing today, it is because science has remained too focused on established fields", that papers and patents are increasingly less likely to be "disruptive" in terms of breaking with the past as measured by the "CD index",[93] and that there is a great stagnation – possibly as part of a larger trend[112] – whereby, e.g., "things haven't changed nearly as much since the 1970s" when excluding the computer and the Internet.

A better understanding of potential slowdowns according to some measures could be a major opportunity to improve humanity's future.[113] For example, emphasis on citations in the measurement of scientific productivity, information overload,[112] reliance on a narrower set of existing knowledge (which may include narrow specialization and related contemporary practices),[93] and risk-avoidant funding structures[114] may have shifted research "toward incremental science and away from exploratory projects that are more likely to fail".[115] The study that introduced the "CD index" suggests that the overall number of papers has risen while the total number of "highly disruptive" papers, as measured by the index, has not (notably, the 1998 discovery of the accelerating expansion of the universe has a CD index of 0). Their results also suggest scientists and inventors "may be struggling to keep up with the pace of knowledge expansion".[116][93]
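
The sketch below computes the CD index as it is commonly defined: over all papers citing the focal paper and/or its references, a paper citing only the focal work signals disruption, while one citing both signals consolidation. The toy citation sets are invented, and readers should consult the cited work for the exact specification.

```python
# Sketch of the CD (consolidation-disruption) index as commonly defined:
# over all papers i citing the focal paper and/or its references, with
# f_i = 1 if i cites the focal paper and b_i = 1 if i cites any of its
# references, CD is the mean of (-2 * f_i * b_i + f_i). Citing only the
# focal paper contributes +1 (disruption), citing both contributes -1
# (consolidation), citing only the references contributes 0.
def cd_index(citers_of_focal, citers_of_refs):
    pool = citers_of_focal | citers_of_refs
    if not pool:
        return 0.0
    total = 0
    for i in pool:
        f = 1 if i in citers_of_focal else 0
        b = 1 if i in citers_of_refs else 0
        total += -2 * f * b + f
    return total / len(pool)

# Toy data: five later papers citing a focal paper and/or its references.
print(cd_index({"p1", "p2", "p3"}, {"p3", "p4", "p5"}))  # (1 + 1 - 1 + 0 + 0) / 5 = 0.2
```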

Various ways of measuring the "novelty" of studies – novelty metrics[115] – have been proposed to balance a potential anti-novelty bias, such as textual analysis[115] or measuring whether a study makes first-time-ever combinations of referenced journals, taking into account the difficulty of such combinations.[117] Other approaches include proactively funding risky projects.[58] (see above)
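
A much-simplified sketch of the journal-combination idea follows: flag reference lists that pair journals never co-cited in earlier papers. The difficulty adjustment mentioned above is omitted, and the journal names are illustrative.

```python
# Simplified novelty sketch: flag reference lists that pair journals
# never co-cited before. The cited approach additionally weights each
# pairing by how unlikely it is; that adjustment is omitted here.
from itertools import combinations

seen_pairs = set()  # journal pairs observed in earlier reference lists

def novel_journal_pairs(cited_journals):
    """Return the first-time-ever journal pairings in one reference list."""
    pairs = {tuple(sorted(p)) for p in combinations(set(cited_journals), 2)}
    new_pairs = pairs - seen_pairs
    seen_pairs.update(pairs)
    return new_pairs

print(novel_journal_pairs(["Nature", "Cell", "PNAS"]))      # all pairs are new
print(novel_journal_pairs(["Nature", "PNAS", "Synthese"]))  # only Synthese pairings are new
```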

Topic mapping

Science maps could show the main interrelated topics within a certain scientific domain, their change over time, and their key actors (researchers, institutions, journals). They may help find factors that determine the emergence of new scientific fields and the development of interdisciplinary areas, and could be relevant for science policy purposes.[118] (see above) Theories of scientific change could guide "the exploration and interpretation of visualized intellectual structures and dynamic patterns".[119] The maps can show the intellectual, social or conceptual structure of a research field.[120] Beyond visual maps, expert survey-based studies and similar approaches could identify understudied or neglected societally important areas, topic-level problems (such as stigma or dogma), or potential misprioritizations.[additional citation(s) needed] Examples include studies about policy in relation to public health[121] and the social science of climate change mitigation, where it has been estimated that only 0.12% of all funding for climate-related research is spent on such social science, despite the most urgent puzzle at the current juncture being how to mitigate climate change, whereas the natural science of climate change is already well established.

There are also studies that map a scientific field or a topic such as the study of the use of research evidence in policy and practice, partly using surveys.[123]

Controversies, current debates and disagreement

See also: § scite.ai, and § Topic mapping

Percent of all citances in each field that contain signals of disagreement[124]

Some research investigates scientific controversies, and may identify currently ongoing major debates (e.g. open questions) and disagreement between scientists or studies.[additional citation(s) needed] One study suggests the level of disagreement was highest in the social sciences and humanities (0.61%), followed by biomedical and health sciences (0.41%), life and earth sciences (0.29%), physical sciences and engineering (0.15%), and mathematics and computer science (0.06%).[124] Such research may also show where the disagreements are, especially if they cluster, including visually, such as with cluster diagrams.
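
The study above detects disagreement via signal phrases in citances (the sentences surrounding citations). The sketch below illustrates the idea with a few invented cue phrases; the actual study uses a curated, manually validated signal set.

```python
# Toy sketch of flagging disagreement in citances (citation sentences)
# using cue phrases. The cues below are invented stand-ins; the cited
# study uses a curated, manually validated set of signals.
import re

DISAGREEMENT_CUES = re.compile(
    r"\b(in contrast to|contrary to|disagrees? with|"
    r"fail(?:s|ed)? to replicate|challenges?)\b",
    re.IGNORECASE,
)

citances = [
    "In contrast to Smith et al. [3], we observe no such effect.",
    "Our findings agree with earlier work [4].",
]
for sentence in citances:
    flagged = bool(DISAGREEMENT_CUES.search(sentence))
    print(flagged, "-", sentence)
```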

Challenges of interpretation of pooled results

Studies about a specific research question or research topic are often reviewed in the form of higher-level overviews in which results from various studies are integrated, compared, critically analyzed and interpreted. Examples of such works are scientific reviews and meta-analyses. These and related practices face various challenges and are a subject of metascience.

A meta-analysis of several small studies does not always predict the results of a single large study.[125] Some have argued that a weakness of the method is that sources of bias are not controlled by the method: a good meta-analysis cannot correct for poor design or bias in the original studies.[126] This would mean that only methodologically sound studies should be included in a meta-analysis, a practice called ‘best evidence synthesis’.[126] Other meta-analysts would include weaker studies, and add a study-level predictor variable that reflects the methodological quality of the studies to examine the effect of study quality on the effect size.[127] However, others have argued that a better approach is to preserve information about the variance in the study sample, casting as wide a net as possible, and that methodological selection criteria introduce unwanted subjectivity, defeating the purpose of the approach.[128]

Various issues with the included or available studies, such as heterogeneity of the methods used, may lead to faulty conclusions in the meta-analysis.[129]
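
For readers unfamiliar with the mechanics being critiqued here, the sketch below pools toy effect estimates with standard fixed-effect inverse-variance weighting and reports Cochran's Q and the I² heterogeneity statistic; the numbers are invented.

```python
# Minimal fixed-effect meta-analysis sketch: inverse-variance pooling
# plus Cochran's Q and the I^2 heterogeneity statistic. Standard
# textbook formulas; the effect sizes and variances are toy data.
import math

effects   = [0.30, 0.10, 0.45, 0.20]  # per-study effect estimates
variances = [0.02, 0.04, 0.05, 0.03]  # per-study sampling variances

weights = [1 / v for v in variances]
pooled  = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
se      = math.sqrt(1 / sum(weights))
Q       = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
df      = len(effects) - 1
I2      = max(0.0, (Q - df) / Q) if Q > 0 else 0.0

print(f"pooled effect {pooled:.3f} (SE {se:.3f}), Q = {Q:.2f}, I^2 = {I2:.0%}")
```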

Knowledge integration and living documents

Various problems require swift integration of new and existing science-based knowledge. Especially settings where there are a large number of loosely related projects and initiatives benefit from a common ground or "commons".[108]

Evidence synthesis can be applied to important global challenges, notably ones that are both relatively urgent and certain: "climate change, energy transitions, biodiversity loss, antimicrobial resistance, poverty eradication and so on". It was suggested that a better system would keep summaries of research evidence up to date via living systematic reviews – e.g. as living documents. While the number of scientific papers and data (or information and online knowledge) has risen substantially,[additional citation(s) needed] the number of published academic systematic reviews has risen from "around 6,000 in 2011 to more than 45,000 in 2021".[130] An evidence-based approach is important for progress in science, policy, medical and other practices. For example, meta-analyses can quantify what is known and identify what is not yet known,[82] and can place "truly innovative and highly interdisciplinary ideas" into the context of established knowledge, which may enhance their impact.[58] (see above)

Factors of success and progress

See also: § Growth or stagnation of science overall

It has been hypothesized that a deeper understanding of the factors behind successful science could "enhance prospects of science as a whole to more effectively address societal problems".[58]

Novel ideas and disruptive scholarship

Two metascientists reported that "structures fostering disruptive scholarship and focusing attention on novel ideas" could be important because, in a growing scientific field, citation flows disproportionately consolidate onto already well-cited papers, possibly slowing and inhibiting canonical progress.[131][132] A study concluded that to enhance the impact of truly innovative and highly interdisciplinary novel ideas, they should be placed in the context of established knowledge.[58]

Mentorship, partnerships and social factors

Other researchers reported that the most successful protégés – in terms of "likelihood of prizewinning, National Academy of Science (NAS) induction, or superstardom" – studied under mentors who published research for which they were conferred a prize after the protégés' mentorship. Studying original topics rather than these mentors' research topics was also positively associated with success.[133][134] Highly productive partnerships are also a topic of research – e.g. "super-ties" of frequent co-authorship between two individuals who can complement each other's skills, likely also the result of other factors such as mutual trust, conviction, commitment and fun.[135][58]

Study of successful scientists and processes, general skills and activities

The emergence or origin of ideas by successful scientists is also a topic of research, for example reviewing existing ideas on how Mendel made his discoveries[136] – or, more generally, the process of discovery by scientists. Science is a "multifaceted process of appropriation, copying, extending, or combining ideas and inventions" [and other types of knowledge or information], and not an isolated process.[58] There are also a few studies investigating scientists' habits, common modes of thinking, reading habits, use of information sources, digital literacy skills, and workflows.[137][138][139][140][141]

Labor advantage

A study theorized that in many disciplines, larger scientific productivity or success by elite universities can be explained by their larger pool of available funded laborers.[142][143][further explanation needed]

Ultimate impacts

Success (in science) is often measured in terms of metrics like citations, not in terms of the eventual or potential impact on lives and society, which awards (see above) sometimes do.[additional citation(s) needed] Problems with such metrics are roughly outlined elsewhere in this article and include that reviews replace citations to primary studies.[82] There are also proposals for changes to the academic incentives systems that increase the recognition of societal impact in the research process.[144]

Progress studies

A proposed field of "Progress Studies" could investigate how scientists (or funders or evaluators of scientists) should be acting, "figuring out interventions", and study progress itself.[145] The field was explicitly proposed in a 2019 essay and described as an applied science that prescribes action.[146]

As and for acceleration of progress

A study suggests that improving the way science is done could accelerate the rate of scientific discovery and its applications, which could be useful for finding urgent solutions to humanity's problems, improving humanity's conditions, and enhancing understanding of nature. Metascientific studies can seek to identify aspects of science that need improvement, and to develop ways to improve them.[84] If science is accepted as the fundamental engine of economic growth and social progress, this raises "the question of what we – as a society – can do to accelerate science, and to direct science toward solving society's most important problems."[147] However, one of the authors clarified that a one-size-fits-all approach is not thought to be the right answer – for example, in funding, DARPA models, curiosity-driven methods, allowing "a single reviewer to champion a project even if his or her peers do not agree", and various other approaches all have their uses. Nevertheless, evaluation of them can help build knowledge of what works or works best.[114]

Reforms

Meta-research identifying flaws in scientific practice has inspired reforms in science. These reforms seek to address and fix problems in scientific practice which lead to low-quality or inefficient research.

A 2015 study lists "fragmented" efforts in meta-research.[1]

Pre-registration

The practice of registering a scientific study before it is conducted is called pre-registration. It arose as a means to address the replication crisis. Pre-registration requires the submission of a registered report, which is then accepted for publication or rejected by a journal based on theoretical justification, experimental design, and the proposed statistical analysis. Pre-registration of studies serves to prevent publication bias (e.g. not publishing negative results), reduce data dredging, and increase replicability.[148][149]

Reporting standards

Studies showing poor consistency and quality of reporting have demonstrated the need for reporting standards and guidelines in science, which has led to the rise of organisations that produce such standards, such as CONSORT (Consolidated Standards of Reporting Trials) and the EQUATOR Network.

The EQUATOR (Enhancing the QUAlity and Transparency Of health Research)[150] Network is an international initiative aimed at promoting transparent and accurate reporting of health research studies to enhance the value and reliability of medical research literature.[151] The EQUATOR Network was established with the goals of raising awareness of the importance of good reporting of research, assisting in the development, dissemination and implementation of reporting guidelines for different types of study designs, monitoring the status of the quality of reporting of research studies in the health sciences literature, and conducting research relating to issues that impact the quality of reporting of health research studies.[152] The Network acts as an "umbrella" organisation, bringing together developers of reporting guidelines, medical journal editors and peer reviewers, research funding bodies, and other key stakeholders with a mutual interest in improving the quality of research publications and research itself.

Applications

The areas of application of metascience include ICTs, medicine, psychology and physics.

ICTs

Metascience is used in the creation and improvement of technical systems (ICTs) and standards of science evaluation, incentivization, communication, commissioning, funding, regulation, production, management, use and publication. Such work can be called "applied metascience"[153][better source needed] and may seek to explore ways to increase the quantity, quality and positive impact of research. One example is the development of alternative metrics.[58]

Study screening and feedback

Various websites or tools identify inappropriate studies and/or enable feedback, such as PubPeer, Cochrane's Risk of Bias Tool[154] and Retraction Watch. Medical and academic disputes are as ancient as antiquity, and a study calls for research into "constructive and obsessive criticism" and into policies to "help strengthen social media into a vibrant forum for discussion, and not merely an arena for gladiator matches".[155] Feedback on studies can be found via altmetrics, which are often integrated at the website of the study – most often as an embedded Altmetric badge – but may often be incomplete, such as showing only social media discussions that link to the study directly but not those that link to news reports about the study. (see above)

Tools used, modified, extended or investigated

Tools may be developed through meta-research, or may be used or investigated by it. Notable examples include:

  • The tool scite.ai aims to track and link citations of papers as ‘Supporting’, ‘Mentioning’ or ‘Contrasting’ the study.[156][157][158]
  • The Scite Reference Check bot is an extension of scite.ai that scans new article PDFs "for references to retracted papers, and posts both the citing and retracted papers on Twitter" and also "flags when new studies cite older ones that have issued corrections, errata, withdrawals, or expressions of concern".[158] Studies have suggested as few as 4% of citations to retracted papers clearly recognize the retraction.[158]
  • Search engines like Google Scholar are used to find studies and the notification service Google Alerts enables notifications for new studies matching specified search terms. Scholarly communication infrastructure includes search databases.[159]
  • The shadow library Sci-Hub is a topic of metascience.[160]
  • Personal knowledge management systems for research, knowledge and task management, such as saving information in organized ways[161] with multi-document text editors for future use.[162][163] Such systems could be described – along with e.g. web browsers (tab add-ons[164] etc.) and search software[additional citation(s) needed] – as part of "mind-machine partnerships" that could be investigated by metascience for how they could improve science.[58]
  • Scholia – efforts to open scholarly publication metadata and use it via Wikidata.[165] (see above)
  • Various software enables common metascientific practices such as bibliometric analysis.[166]
Development

According to a study, "a simple way to check how often studies have been repeated, and whether or not the original findings are confirmed" is needed due to reproducibility issues in science.[167][168] A study suggests a tool for screening studies for early warning signs of research fraud.[169]

Medicine

Clinical research in medicine is often of low quality, and many studies cannot be replicated.[170][171] An estimated 85% of research funding is wasted.[172] Additionally, the presence of bias affects research quality.[173] The pharmaceutical industry exerts substantial influence on the design and execution of medical research. Conflicts of interest are common among authors of medical literature[174] and among editors of medical journals. While almost all medical journals require their authors to disclose conflicts of interest, editors are not required to do so.[175] Financial conflicts of interest have been linked to higher rates of positive study results. In antidepressant trials, pharmaceutical sponsorship is the best predictor of trial outcome.[176]

Blinding is another focus of meta-research, as error caused by poor blinding is a source of experimental bias. Blinding is not well reported in medical literature, and widespread misunderstanding of the subject has resulted in poor implementation of blinding in clinical trials.[177] Furthermore, failure of blinding is rarely measured or reported.[178] Research showing the failure of blinding in antidepressant trials has led some scientists to argue that antidepressants are no better than placebo.[179][180] In light of meta-research showing failures of blinding, CONSORT standards recommend that all clinical trials assess and report the quality of blinding.[181]

Studies have shown that systematic reviews of existing research evidence are sub-optimally used in planning new research or summarizing results.[182] Cumulative meta-analyses of studies evaluating the effectiveness of medical interventions have shown that many clinical trials could have been avoided if a systematic review of existing evidence had been done prior to conducting a new trial.[183][184][185] For example, Lau et al.[183] analyzed 33 clinical trials (involving 36,974 patients) evaluating the effectiveness of intravenous streptokinase for acute myocardial infarction. Their cumulative meta-analysis demonstrated that 25 of the 33 trials could have been avoided if a systematic review had been conducted before each new trial. In other words, randomizing 34,542 patients was potentially unnecessary. One study[186] analyzed 1,523 clinical trials included in 227 meta-analyses and concluded that "less than one quarter of relevant prior studies" were cited. They also confirmed earlier findings that most clinical trial reports do not present a systematic review to justify the research or summarize the results.[186]
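
Cumulative meta-analysis simply re-pools the evidence in chronological order each time a new trial appears, revealing the point at which the pooled estimate became conclusive. The sketch below reuses standard inverse-variance pooling on invented numbers, not the streptokinase data of Lau et al.

```python
# Cumulative meta-analysis sketch: re-pool after each trial, in
# chronological order, to see when the pooled 95% CI first excludes
# "no effect". Toy numbers, not the streptokinase data of Lau et al.
import math

trials = [  # (year, effect estimate, sampling variance) - hypothetical
    (1972, -0.40, 0.20),
    (1975, -0.25, 0.10),
    (1979, -0.35, 0.08),
    (1983, -0.30, 0.05),
]

w_sum = wy_sum = 0.0
for year, y, v in trials:
    w = 1 / v
    w_sum += w
    wy_sum += w * y
    pooled = wy_sum / w_sum
    se = math.sqrt(1 / w_sum)
    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    print(f"{year}: pooled {pooled:+.2f}, 95% CI [{lo:+.2f}, {hi:+.2f}]")
# With these toy numbers, only the final confidence interval excludes
# zero: the evidence becomes conclusive once the 1983 trial is added.
```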

Many treatments used in modern medicine have been proven to be ineffective, or even harmful. A 2007 study by John Ioannidis found that it took an average of ten years for the medical community to stop referencing popular practices after their efficacy was unequivocally disproven.[187][188]

Psychology

Metascience has revealed significant problems in psychological research. The field suffers from high bias, low reproducibility, and widespread misuse of statistics.[189][190][191] The replication crisis affects psychology more strongly than any other field; as many as two-thirds of highly publicized findings may be impossible to replicate.[192] Meta-research finds that 80-95% of psychological studies support their initial hypotheses, which strongly implies the existence of publication bias.[193]
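
One way meta-research quantifies such an imbalance is an excess-significance-style check: compare the observed share of hypothesis-confirming results with the share expected given plausible statistical power. The sketch below uses invented counts and an assumed average power of 35%, not the figures from the cited studies.

```python
# Toy excess-significance-style check: if average statistical power in
# a literature were ~35%, seeing 180 positive results out of 200 studies
# would be extraordinarily improbable without bias. The power and the
# counts are illustrative assumptions, not the cited estimates.
from scipy.stats import binomtest

n_studies, n_positive = 200, 180  # hypothetical sample: 90% positive
assumed_power = 0.35              # assumed average power to detect true effects

result = binomtest(n_positive, n_studies, assumed_power, alternative="greater")
print(f"P(>= {n_positive} positives | power = {assumed_power}): {result.pvalue:.2e}")
```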

The replication crisis has led to renewed efforts to re-test important findings.[194][195] In response to concerns about publication bias and p-hacking, more than 140 psychology journals have adopted result-blind peer review, in which studies are pre-registered and published without regard for their outcome.[196] An analysis of these reforms estimated that 61 percent of result-blind studies produce null results, in contrast with 5 to 20 percent in earlier research. This analysis shows that result-blind peer review substantially reduces publication bias.[193]

Psychologists routinely confuse statistical significance with practical importance, enthusiastically reporting great certainty in unimportant facts.[197] Some psychologists have responded with increased use of effect size statistics, rather than sole reliance on p values.[citation needed]

Physics

Richard Feynman noted that estimates of physical constants were closer to published values than would be expected by chance. This was believed to be the result of confirmation bias: results that agreed with existing literature were more likely to be believed, and therefore published. Physicists now implement blinding to prevent this kind of bias.[198]

Organizations and institutes

There are several organizations and universities across the globe which work on meta-research, including the Meta-Research Innovation Center Berlin,[199] the Meta-Research Innovation Center at Stanford,[200][201] the Meta-Research Center at Tilburg University, the Meta-research & Evidence Synthesis Unit at The George Institute for Global Health in India, and the Center for Open Science. Organizations that develop tools for metascience include Our Research, the Center for Scientific Integrity, and altmetrics companies. There is an annual Metascience Conference.[202]

See also

  • Accelerating change
  • Citation analysis
  • Epistemology
  • Evidence-based practices
  • Evidence-based medicine
  • Evidence-based policy
  • Further research is needed
  • HARKing
  • Logology (science)
  • Metadata#In science
  • Metatheory
  • Open science
  • Philosophy of science
  • Sociology of scientific knowledge
  • Self-Organized Funding Allocation

References

  1. Ioannidis, John P. A.; Fanelli, Daniele; Dunne, Debbie Drake; Goodman, Steven N. (2 October 2015). "Meta-research: Evaluation and Improvement of Research Methods and Practices". PLOS Biology. 13 (10): e1002264. doi:10.1371/journal.pbio.1002264. ISSN 1544-9173. PMC 4592065. PMID 26431313.
  2. Bach, Becky (8 December 2015). "On communicating science and uncertainty: A podcast with John Ioannidis". Scope. Retrieved 20 May 2019.
  3. Pashler, Harold; Harris, Christine R. (2012). "Is the Replicability Crisis Overblown? Three Arguments Examined". Perspectives on Psychological Science. 7 (6): 531–536. doi:10.1177/1745691612463401. ISSN 1745-6916. PMID 26168109. S2CID 1342421.
  4. Nishikawa-Pacher, Andreas; Heck, Tamara; Schoch, Kerstin (4 October 2022). "Open Editors: A dataset of scholarly journals' editorial board positions". Research Evaluation. doi:10.1093/reseval/rvac037. eISSN 1471-5449. ISSN 0958-2029.
  5. Ioannidis, JP (August 2005). "Why most published research findings are false". PLOS Medicine. 2 (8): e124. doi:10.1371/journal.pmed.0020124. PMC 1182327. PMID 16060722.
  6. Schor, Stanley (1966). "Statistical Evaluation of Medical Journal Manuscripts". JAMA: The Journal of the American Medical Association. 195 (13): 1123–1128. doi:10.1001/jama.1966.03100130097026. ISSN 0098-7484. PMID 5952081.
  7. "Highly Cited Researchers". Retrieved September 17, 2015.
  8. Medicine – Stanford Prevention Research Center. John P.A. Ioannidis.
  9. Robert Lee Hotz (September 14, 2007). "Most Science Studies Appear to Be Tainted By Sloppy Analysis". Wall Street Journal. Dow Jones & Company. Retrieved 2016-12-05.
  10. Howick, J.; Koletsi, D.; Pandis, N.; Fleming, P. S.; Loef, M.; Walach, H.; Schmidt, S.; Ioannidis, J. A. (2020). "The quality of evidence for medical interventions does not improve or worsen: a metaepidemiological study of Cochrane reviews". Journal of Clinical Epidemiology. 126: 154–159.
  11. "Researching the researchers". Nature Genetics. 46 (5): 417. 2014. doi:10.1038/ng.2972. ISSN 1061-4036. PMID 24769715.
  12. Enserink, Martin (2018). "Research on research". Science. 361 (6408): 1178–1179. Bibcode:2018Sci...361.1178E. doi:10.1126/science.361.6408.1178. ISSN 0036-8075. PMID 30237336. S2CID 206626417.
  13. Rennie, Drummond (1990). "Editorial Peer Review in Biomedical Publication". JAMA. 263 (10): 1317–1441. doi:10.1001/jama.1990.03440100011001. ISSN 0098-7484. PMID 2304208.
  14. Harriman, Stephanie L.; Kowalczuk, Maria K.; Simera, Iveta; Wager, Elizabeth (2016). "A new forum for research on research integrity and peer review". Research Integrity and Peer Review. 1 (1): 5. doi:10.1186/s41073-016-0010-y. ISSN 2058-8615. PMC 5794038. PMID 29451544.
  15. Fanelli, Daniele; Costas, Rodrigo; Ioannidis, John P. A. (2017). "Meta-assessment of bias in science". Proceedings of the National Academy of Sciences of the United States of America. 114 (14): 3714–3719. Bibcode:2017PNAS..114.3714F. doi:10.1073/pnas.1618569114. ISSN 1091-6490. PMC 5389310. PMID 28320937.
  16. Check Hayden, Erika (2013). "Weak statistical standards implicated in scientific irreproducibility". Nature. doi:10.1038/nature.2013.14131. S2CID 211729036. Retrieved 9 May 2019.
  17. Markowitz, David M.; Hancock, Jeffrey T. (2016). "Linguistic obfuscation in fraudulent science". Journal of Language and Social Psychology. 35 (4): 435–445. doi:10.1177/0261927X15614605. S2CID 146174471.
  18. Ding, Y. (2010). "Applying weighted PageRank to author citation networks". Journal of the American Society for Information Science and Technology. 62 (2): 236–245. arXiv:1102.1760. doi:10.1002/asi.21452. S2CID 3752804.
  19. Galipeau, James; Moher, David; Campbell, Craig; Hendry, Paul; Cameron, D. William; Palepu, Anita; Hébert, Paul C. (March 2015). "A systematic review highlights a knowledge gap regarding the effectiveness of health-related training programs in journalology". Journal of Clinical Epidemiology. 68 (3): 257–265. doi:10.1016/j.jclinepi.2014.09.024. PMID 25510373.
  20. Wilson, Mitch; Moher, David (March 2019). "The Changing Landscape of Journalology in Medicine". Seminars in Nuclear Medicine. 49 (2): 105–114. doi:10.1053/j.semnuclmed.2018.11.009. hdl:10393/38493. PMID 30819390. S2CID 73471103.
  21. Couzin-Frankel, Jennifer (18 September 2018). "'Journalologists' use scientific methods to study academic publishing. Is their work improving science?". Science. doi:10.1126/science.aav4758. S2CID 115360831.
  22. Schooler, J. W. (2014). "Metascience could rescue the 'replication crisis'". Nature. 515 (7525): 9. Bibcode:2014Natur.515....9S. doi:10.1038/515009a. PMID 25373639.
  23. Smith, Noah (2 November 2017). "Why 'Statistical Significance' Is Often Insignificant". Bloomberg.com. Retrieved 7 November 2017.
  24. Pashler, Harold; Wagenmakers, Eric Jan (2012). "Editors' Introduction to the Special Section on Replicability in Psychological Science: A Crisis of Confidence?". Perspectives on Psychological Science. 7 (6): 528–530. doi:10.1177/1745691612465253. PMID 26168108. S2CID 26361121.
  25. Gary Marcus (May 1, 2013). "The Crisis in Social Psychology That Isn't". The New Yorker.
  26. Jonah Lehrer (December 13, 2010). "The Truth Wears Off". The New Yorker.
  27. "Dozens of major cancer studies can't be replicated". Science News. 7 December 2021. Retrieved 19 January 2022.
  28. "Reproducibility Project: Cancer Biology". www.cos.io. Center for Open Science. Retrieved 19 January 2022.
  29. Staddon, John (2017). Scientific Method: How Science Works, Fails to Work or Pretends to Work. Taylor and Francis.
  30. Yeung, Andy W. K. (2017). "Do Neuroscience Journals Accept Replications? A Survey of Literature". Frontiers in Human Neuroscience. 11: 468. doi:10.3389/fnhum.2017.00468. ISSN 1662-5161. PMC 5611708. PMID 28979201.
  31. Martin, G. N.; Clarke, Richard M. (2017). "Are Psychology Journals Anti-replication? A Snapshot of Editorial Practices". Frontiers in Psychology. 8: 523. doi:10.3389/fpsyg.2017.00523. ISSN 1664-1078. PMC 5387793. PMID 28443044.
  32. Binswanger, Mathias (2015). "How Nonsense Became Excellence: Forcing Professors to Publish". In Welpe, Isabell M.; Wollersheim, Jutta; Ringelhan, Stefanie; Osterloh, Margit (eds.). Incentives and Performance: Governance of Research Organizations. Springer International Publishing. pp. 19–32. doi:10.1007/978-3-319-09785-5_2. ISBN 978-3319097855. S2CID 110698382.
  33. Edwards, Marc A.; Roy, Siddhartha (22 September 2016). "Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition". Environmental Engineering Science. 34 (1): 51–61. doi:10.1089/ees.2016.0223. PMC 5206685. PMID 28115824.
  34. Brookshire, Bethany (21 October 2016). "Blame bad incentives for bad science". Science News. Retrieved 11 July 2019.
  35. Smaldino, Paul E.; McElreath, Richard (2016). "The natural selection of bad science". Royal Society Open Science. 3 (9): 160384. arXiv:1605.09511. Bibcode:2016RSOS....360384S. doi:10.1098/rsos.160384. PMC 5043322. PMID 27703703.
  36. Chapman, Colin A.; Bicca-Marques, Júlio César; Calvignac-Spencer, Sébastien; Fan, Pengfei; Fashing, Peter J.; Gogarten, Jan; Guo, Songtao; Hemingway, Claire A.; Leendertz, Fabian; Li, Baoguo; Matsuda, Ikki; Hou, Rong; Serio-Silva, Juan Carlos; Chr. Stenseth, Nils (4 December 2019). "Games academics play and their consequences: how authorship, h-index and journal impact factors are shaping the future of academia". Proceedings of the Royal Society B: Biological Sciences. 286 (1916): 20192047. doi:10.1098/rspb.2019.2047. ISSN 0962-8452.
  37. Holcombe, Alex O. (September 2019). "Contributorship, Not Authorship: Use CRediT to Indicate Who Did What". Publications. 7 (3): 48. doi:10.3390/publications7030048.
  38. McNutt, Marcia K.; Bradford, Monica; Drazen, Jeffrey M.; Hanson, Brooks; Howard, Bob; Jamieson, Kathleen Hall; Kiermer, Véronique; Marcus, Emilie; Pope, Barbara Kline; Schekman, Randy; Swaminathan, Sowmya; Stang, Peter J.; Verma, Inder M. (13 March 2018). "Transparency in authors' contributions and responsibilities to promote integrity in scientific publication". Proceedings of the National Academy of Sciences. 115 (11): 2557–2560. Bibcode:2018PNAS..115.2557M. doi:10.1073/pnas.1715374115. ISSN 0027-8424. PMC 5856527. PMID 29487213.
  39. Brand, Amy; Allen, Liz; Altman, Micah; Hlava, Marjorie; Scott, Jo (1 April 2015). "Beyond authorship: attribution, contribution, collaboration, and credit". Learned Publishing. 28 (2): 151–155. doi:10.1087/20150211. S2CID 45167271.
  40. Singh Chawla, Dalmeet (October 2015). "Digital badges aim to clear up politics of authorship". Nature. 526 (7571): 145–146. Bibcode:2015Natur.526..145S. doi:10.1038/526145a. ISSN 1476-4687. PMID 26432249. S2CID 256770827.
  41. Fire, Michael; Guestrin, Carlos (1 June 2019). "Over-optimization of academic publishing metrics: observing Goodhart's Law in action". GigaScience. 8 (6): giz053. doi:10.1093/gigascience/giz053. PMC 6541803. PMID 31144712.
  42. Elson, Malte; Huff, Markus; Utz, Sonja (1 March 2020). "Metascience on Peer Review: Testing the Effects of a Study's Originality and Statistical Significance in a Field Experiment". Advances in Methods and Practices in Psychological Science. 3 (1): 53–65. doi:10.1177/2515245919895419. ISSN 2515-2459. S2CID 212778011.
  43. McLean, Robert K. D.; Sen, Kunal (1 April 2019). "Making a difference in the real world? A meta-analysis of the quality of use-oriented research using the Research Quality Plus approach". Research Evaluation. 28 (2): 123–135. doi:10.1093/reseval/rvy026.
  44. "Bringing Rigor to Relevant Questions: How Social Science Research Can Improve Youth Outcomes in the Real World" (PDF). Retrieved 22 November 2021.
  45. Fecher, Benedikt; Friesike, Sascha; Hebing, Marcel; Linek, Stephanie (20 June 2017). "A reputation economy: how individual reward considerations trump systemic arguments for open access to data". Palgrave Communications. 3 (1): 1–10. doi:10.1057/palcomms.2017.51. ISSN 2055-1045.
  46. La Porta, Caterina A. M.; Zapperi, Stefano (1 December 2022). "America's top universities reap the benefit of Italian-trained scientists". Nature Italy. doi:10.1038/d43978-022-00163-5. S2CID 254331807. Retrieved 18 December 2022.
  47. Leydesdorff, L.; Milojevic, S. (2013). "Scientometrics". arXiv:1208.4566. Forthcoming in: Lynch, M. (ed.), International Encyclopedia of Social and Behavioral Sciences, subsection 85030 (2015).
  48. Singh, Navinder (8 October 2021). "Plea to publish less". arXiv:2201.07985 [physics.soc-ph].
  49. Manchanda, Saurav; Karypis, George (November 2021). "Evaluating Scholarly Impact: Towards Content-Aware Bibliometrics". Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics: 6041–6053. doi:10.18653/v1/2021.emnlp-main.488. S2CID 243865632.
  50. Manchanda, Saurav; Karypis, George. "Importance Assessment in Scholarly Networks" (PDF).
  51. Nielsen, Kristian H. (1 March 2021). "Science and public policy". Metascience. 30 (1): 79–81. doi:10.1007/s11016-020-00581-5. ISSN 1467-9981. PMC 7605730. S2CID 226237994.
  52. ^ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. pp. 229–237. ISBN 978-0199678112.
  53. ^ Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. United Kingdom: Bloomsbury Publishing. p. 200. ISBN 978-1526600219.
  54. ^ «Technology is changing faster than regulators can keep up — here’s how to close the gap». World Economic Forum. Retrieved 27 January 2022.
  55. ^ Overland, Indra; Sovacool, Benjamin K. (1 April 2020). «The misallocation of climate research funding». Energy Research & Social Science. 62: 101349. doi:10.1016/j.erss.2019.101349. ISSN 2214-6296. S2CID 212789228.
  56. ^ «Nobel prize-winning work is concentrated in minority of scientific fields». phys.org. Retrieved 17 August 2020.
  57. ^ Ioannidis, John P. A.; Cristea, Ioana-Alina; Boyack, Kevin W. (29 July 2020). «Work honored by Nobel prizes clusters heavily in a few scientific fields». PLOS ONE. 15 (7): e0234612. Bibcode:2020PLoSO..1534612I. doi:10.1371/journal.pone.0234612. ISSN 1932-6203. PMC 7390258. PMID 32726312.
  58. ^ a b c d e f g h i j k l Fortunato, Santo; Bergstrom, Carl T.; Börner, Katy; Evans, James A.; Helbing, Dirk; Milojević, Staša; Petersen, Alexander M.; Radicchi, Filippo; Sinatra, Roberta; Uzzi, Brian; Vespignani, Alessandro; Waltman, Ludo; Wang, Dashun; Barabási, Albert-László (2 March 2018). «Science of science». Science. 359 (6379): eaao0185. doi:10.1126/science.aao0185. PMC 5949209. PMID 29496846. Retrieved 22 November 2021.
  59. ^ Fajardo-Ortiz, David; Hornbostel, Stefan; Montenegro de Wit, Maywa; Shattuck, Annie (22 June 2022). «Funding CRISPR: Understanding the role of government and philanthropic institutions in supporting academic research within the CRISPR innovation system». Quantitative Science Studies. 3 (2): 443–456. doi:10.1162/qss_a_00187. S2CID 235266330.
  60. ^ «Research questions that could have a big social impact, organised by discipline». 80,000 Hours. Retrieved 31 August 2022.
  61. ^ a b Coley, Alan A (30 August 2017). «Open problems in mathematical physics». Physica Scripta. 92 (9): 093003. arXiv:1710.02105. Bibcode:2017PhyS...92i3003C. doi:10.1088/1402-4896/aa83c1. ISSN 0031-8949. S2CID 3892374.
  62. ^ Adolphs, Ralph (1 April 2015). «The unsolved problems of neuroscience». Trends in Cognitive Sciences. 19 (4): 173–175. doi:10.1016/j.tics.2015.01.007. ISSN 1364-6613. PMC 4574630. PMID 25703689. As for Hilbert’s problems, there is a Wikipedia entry for ‘unsolved problems in neuroscience’; there are more popular writings; and there are books. In trying to brainstorm a list of my own, I read the above sources and asked around. This yields a predictable list ranging from ‘how can we cure psychiatric illness?’ to ‘what is consciousness?’ (Box 1). Asking Caltech faculty added entries about how networks function and what neural computation is. Caltech students had things figured out and got straight to the point (‘how can I sleep less?’, ‘how can we save our species?’, ‘can we become immortal?’).
  63. ^ Dev, Sukhendu B. (1 March 2015). «Unsolved problems in biology—The state of current thinking». Progress in Biophysics and Molecular Biology. 117 (2): 232–239. doi:10.1016/j.pbiomolbio.2015.02.001. ISSN 0079-6107. PMID 25687284. Among many of the responses I received, a large majority mentioned several aspects of neuroscience. This is not surprising since the brain remains the most uncharted area in humans. A list of unsolved problems in neuroscience can be found in http://en.wikipedia.org/wiki/List_of_unsolved_problems_in_neuroscience (Accessed January 12, 2015).
  64. ^ Cartaxo, Bruno; Pinto, Gustavo; Ribeiro, Danilo; Kamei, Fernando; Santos, Ronnie E.S.; da Silva, Fábio Q.B.; Soares, Sérgio (May 2017). «Using Q&A Websites as a Method for Assessing Systematic Reviews». 2017 IEEE/ACM 14th International Conference on Mining Software Repositories (MSR): 238–242. doi:10.1109/MSR.2017.5. ISBN 978-1-5386-1544-7. S2CID 5853766.
  65. ^ Synnot, Anneliese; Bragge, Peter; Lowe, Dianne; Nunn, Jack S; O’Sullivan, Molly; Horvat, Lidia; Tong, Allison; Kay, Debra; Ghersi, Davina; McDonald, Steve; Poole, Naomi; Bourke, Noni; Lannin, Natasha; Vadasz, Danny; Oliver, Sandy; Carey, Karen; Hill, Sophie J (May 2018). «Research priorities in health communication and participation: international survey of consumers and other stakeholders». BMJ Open. 8 (5): e019481. doi:10.1136/bmjopen-2017-019481. PMC 5942413. PMID 29739780.
  66. ^ Synnot, Anneliese J.; Tong, Allison; Bragge, Peter; Lowe, Dianne; Nunn, Jack S.; O’Sullivan, Molly; Horvat, Lidia; Kay, Debra; Ghersi, Davina; McDonald, Steve; Poole, Naomi; Bourke, Noni; Lannin, Natasha A.; Vadasz, Danny; Oliver, Sandy; Carey, Karen; Hill, Sophie J. (29 April 2019). «Selecting, refining and identifying priority Cochrane Reviews in health communication and participation in partnership with consumers and other stakeholders». Health Research Policy and Systems. 17 (1): 45. doi:10.1186/s12961-019-0444-z. PMC 6489310. PMID 31036016.
  67. ^ Salerno, Reynolds M.; Gaudioso, Jennifer; Brodsky, Benjamin H. (2007). «Preface». Laboratory Biosecurity Handbook (Illustrated ed.). CRC Press. p. xi. ISBN 9781420006209. Retrieved 23 May 2020.
  68. ^ Piper, Kelsey (2022-04-05). «Why experts are terrified of a human-made pandemic — and what we can do to stop it». Vox. Retrieved 2022-04-08.
  69. ^ Ord, Toby (2020-03-06). «Why we need worst-case thinking to prevent pandemics». The Guardian. ISSN 0261-3077. Retrieved 2020-04-11. This is an edited extract from The Precipice: Existential Risk and the Future of Humanity
  70. ^ Ord, Toby (2021-03-23). «Covid-19 has shown humanity how close we are to the edge». The Guardian. ISSN 0261-3077. Retrieved 2021-03-26.
  71. ^ «Forschung an Krankheitserregern soll sicherer werden». www.sciencemediacenter.de. Retrieved 17 January 2023.
  72. ^ Pannu, Jaspreet; Palmer, Megan J.; Cicero, Anita; Relman, David A.; Lipsitch, Marc; Inglesby, Tom (16 December 2022). «Strengthen oversight of risky research on pathogens». Science. 378 (6625): 1170–1172. Bibcode:2022Sci…378.1170P. doi:10.1126/science.adf6020. ISSN 0036-8075. PMID 36480598. S2CID 254998228.
    • University press release: «Stanford Researchers Recommend Stronger Oversight of Risky Research on Pathogens». Stanford University. Retrieved 17 January 2023.

  73. ^ a b «Science as a Global Public Good». International Science Council. 8 October 2021. Retrieved 22 November 2021.
  74. ^ Jamieson, Kathleen Hall; Kahan, Dan; Scheufele, Dietram A. (17 May 2017). The Oxford Handbook of the Science of Science Communication. Oxford University Press. ISBN 978-0190497637.
  75. ^ Grochala, Rafał (16 December 2019). «Science communication in online media: influence of press releases on coverage of genetics and CRISPR». doi:10.1101/2019.12.13.875278. S2CID 213125031.
  76. ^ «Framing Analysis of News Coverage on Renewable Energy in The Star Online News Portal» (PDF). Retrieved 22 November 2021.
  77. ^ MacLaughlin, Ansel; Wihbey, John; Smith, David (15 June 2018). «Predicting News Coverage of Scientific Articles». Proceedings of the International AAAI Conference on Web and Social Media. 12 (1). doi:10.1609/icwsm.v12i1.14999. ISSN 2334-0770. S2CID 49412893.
  78. ^ Carrigan, Mark; Jordan, Katy (4 November 2021). «Platforms and Institutions in the Post-Pandemic University: a Case Study of Social Media and the Impact Agenda». Postdigital Science and Education. 4 (2): 354–372. doi:10.1007/s42438-021-00269-x. ISSN 2524-4868. S2CID 243760357.
  79. ^ Baykoucheva, Svetla (2015). «Measuring attention». Managing Scientific Information and Research Data: 127–136. doi:10.1016/B978-0-08-100195-0.00014-7. ISBN 978-0081001950.
  80. ^ a b c Zagorova, Olga; Ulloa, Roberto; Weller, Katrin; Flöck, Fabian (12 April 2022). ««I updated the <ref>»: The evolution of references in the English Wikipedia and the implications for altmetrics». Quantitative Science Studies. 3 (1): 147–173. doi:10.1162/qss_a_00171. S2CID 222177064.
  81. ^ Williams, Ann E. (12 June 2017). «Altmetrics: an overview and evaluation». Online Information Review. 41 (3): 311–317. doi:10.1108/OIR-10-2016-0294.
  82. ^ a b c Gurevitch, Jessica; Koricheva, Julia; Nakagawa, Shinichi; Stewart, Gavin (March 2018). «Meta-analysis and the science of research synthesis». Nature. 555 (7695): 175–182. Bibcode:2018Natur.555..175G. doi:10.1038/nature25753. ISSN 1476-4687. PMID 29517004. S2CID 3761687.
  83. ^ Balbi, Stefano; Bagstad, Kenneth J.; Magrach, Ainhoa; Sanz, Maria Jose; Aguilar-Amuchastegui, Naikoa; Giupponi, Carlo; Villa, Ferdinando (17 February 2022). «The global environmental agenda urgently needs a semantic web of knowledge». Environmental Evidence. 11 (1): 5. doi:10.1186/s13750-022-00258-y. ISSN 2047-2382. S2CID 246872765.
  84. ^ a b Khalil, Mohammed M. (2016). «Improving Science for a Better Future». How Should Humanity Steer the Future?. The Frontiers Collection. Springer International Publishing: 113–126. doi:10.1007/978-3-319-20717-9_11. ISBN 978-3-319-20716-2.
  85. ^ «How Do Science Journalists Evaluate Psychology Research?». psyarxiv.com.
  86. ^ Dunlop, Lynda; Veneu, Fernanda (1 September 2019). «Controversies in Science». Science & Education. 28 (6): 689–710. doi:10.1007/s11191-019-00048-y. ISSN 1573-1901. S2CID 255016078.
  87. ^ Norsen, Travis (2016). «Back to the Future: Crowdsourcing Innovation by Refocusing Science Education». How Should Humanity Steer the Future?. The Frontiers Collection: 85–95. doi:10.1007/978-3-319-20717-9_9. ISBN 978-3-319-20716-2.
  88. ^ Bschir, Karim (July 2021). «How to make sense of science: Mano Singham: The great paradox of science: why its conclusions can be relied upon even though they cannot be proven. Oxford: Oxford University Press, 2019, 332 pp, £ 22.99 HB». Metascience. 30 (2): 327–330. doi:10.1007/s11016-021-00654-z. S2CID 254792908.
  89. ^ «Correcting misconceptions — Understanding Science». 21 April 2022. Retrieved 25 January 2023.
  90. ^ Philipp-Muller, Aviva; Lee, Spike W. S.; Petty, Richard E. (26 July 2022). «Why are people antiscience, and what can we do about it?». Proceedings of the National Academy of Sciences. 119 (30): e2120755119. Bibcode:2022PNAS..11920755P. doi:10.1073/pnas.2120755119. ISSN 0027-8424. PMC 9335320. PMID 35858405.
  91. ^ «The 4 bases of anti-science beliefs – and what to do about them». SCIENMAG: Latest Science and Health News. 11 July 2022. Retrieved 25 January 2023.
  92. ^ Hotez, Peter J. «The Antiscience Movement Is Escalating, Going Global and Killing Thousands». Scientific American. Retrieved 25 January 2023.
  93. ^ a b c d e f Park, Michael; Leahey, Erin; Funk, Russell J. (January 2023). «Papers and patents are becoming less disruptive over time». Nature. 613 (7942): 138–144. Bibcode:2023Natur.613..138P. doi:10.1038/s41586-022-05543-x. ISSN 1476-4687. PMID 36600070. S2CID 255466666.
  94. ^ Ginsparg, Paul (September 2021). «Lessons from arXiv’s 30 years of information sharing». Nature Reviews Physics. 3 (9): 602–603. doi:10.1038/s42254-021-00360-z. PMC 8335983. PMID 34377944.
  95. ^ «Nature Journals To Charge Authors Hefty Fee To Make Scientific Papers Open Access». IFLScience. Retrieved 22 November 2021.
  96. ^ «Harvard University says it can’t afford journal publishers’ prices». The Guardian. 24 April 2012. Retrieved 22 November 2021.
  97. ^ Van Noorden, Richard (1 March 2013). «Open access: The true cost of science publishing». Nature. 495 (7442): 426–429. Bibcode:2013Natur.495..426V. doi:10.1038/495426a. ISSN 1476-4687. PMID 23538808. S2CID 27021567.
  98. ^ Tennant, Jonathan P.; Waldner, François; Jacques, Damien C.; Masuzzo, Paola; Collister, Lauren B.; Hartgerink, Chris. H. J. (21 September 2016). «The academic, economic and societal impacts of Open Access: an evidence-based review». F1000Research. 5: 632. doi:10.12688/f1000research.8460.3. PMC 4837983. PMID 27158456.
  99. ^ «Paywall: The business of scholarship review – analysis of a scandal». New Scientist. Retrieved 28 January 2023.
  100. ^ Powell, Kendall (1 February 2016). «Does it take too long to publish research?». Nature. 530 (7589): 148–151. doi:10.1038/530148a. PMID 26863966. S2CID 1013588. Retrieved 28 January 2023.
  101. ^ «Open peer review: bringing transparency, accountability, and inclusivity to the peer review process». Impact of Social Sciences. 13 September 2017. Retrieved 28 January 2023.
  102. ^ Dattani, Saloni. «The Pandemic Uncovered Ways to Speed Up Science». Wired. Retrieved 28 January 2023.
  103. ^ «Speeding up the publication process at PLOS ONE». EveryONE. 13 May 2019. Retrieved 28 January 2023.
  104. ^ a b «Open Alex Data Evolution». observablehq.com. 8 February 2022. Retrieved 18 February 2022.
  105. ^ Singh Chawla, Dalmeet (24 January 2022). «Massive open index of scholarly papers launches». Nature. doi:10.1038/d41586-022-00138-y. Retrieved 14 February 2022.
  106. ^ «OpenAlex: The Promising Alternative to Microsoft Academic Graph». Singapore Management University (SMU). Retrieved 14 February 2022.
  107. ^ «OpenAlex Documentation». Retrieved 18 February 2022.
  108. ^ a b Waagmeester, Andra; Willighagen, Egon L.; Su, Andrew I.; Kutmon, Martina; Gayo, Jose Emilio Labra; Fernández-Álvarez, Daniel; Groom, Quentin; Schaap, Peter J.; Verhagen, Lisa M.; Koehorst, Jasper J. (22 January 2021). «A protocol for adding knowledge to Wikidata: aligning resources on human coronaviruses». BMC Biology. 19 (1): 12. doi:10.1186/s12915-020-00940-y. ISSN 1741-7007. PMC 7820539. PMID 33482803.
  109. ^ Jin, Ching; Ma, Yifang; Uzzi, Brian (5 October 2021). «Scientific prizes and the extraordinary growth of scientific topics». Nature Communications. 12 (1): 5619. arXiv:2012.09269. Bibcode:2021NatCo..12.5619J. doi:10.1038/s41467-021-25712-2. ISSN 2041-1723. PMC 8492701. PMID 34611161.
  110. ^ «Scholia – biomarker». Retrieved 28 January 2023.
  111. ^ Bornmann, Lutz; Haunschild, Robin; Mutz, Rüdiger (7 October 2021). «Growth rates of modern science: a latent piecewise growth curve approach to model publication numbers from established and new literature databases». Humanities and Social Sciences Communications. 8 (1): 1–15. doi:10.1057/s41599-021-00903-w. ISSN 2662-9992. S2CID 229156128.
  112. ^ a b Thompson, Derek (1 December 2021). «America Is Running on Fumes». The Atlantic. Retrieved 27 January 2023.
  113. ^ Collison, Patrick; Nielsen, Michael (16 November 2018). «Science Is Getting Less Bang for Its Buck». The Atlantic. Retrieved 27 January 2023.
  114. ^ a b «How to escape scientific stagnation». The Economist. Retrieved 25 January 2023.
  115. ^ a b c Bhattacharya, Jay; Packalen, Mikko (February 2020). «Stagnation and Scientific Incentives» (PDF). National Bureau of Economic Research.
  116. ^ Tejada, Patricia Contreras (13 January 2023). «With fewer disruptive studies, is science becoming an echo chamber?». Advanced Science News. Archived from the original on 15 February 2023. Retrieved 15 February 2023.
  117. ^
  118. ^ Petrovich, Eugenio (2020). «Science mapping». www.isko.org. Retrieved 27 January 2023.
  119. ^ Chen, Chaomei (21 March 2017). «Science Mapping: A Systematic Review of the Literature». Journal of Data and Information Science. 2 (2): 1–40. doi:10.1515/jdis-2017-0006. S2CID 57737772.
  120. ^ Gutiérrez-Salcedo, M.; Martínez, M. Ángeles; Moral-Munoz, J. A.; Herrera-Viedma, E.; Cobo, M. J. (1 May 2018). «Some bibliometric procedures for analyzing and evaluating research fields». Applied Intelligence. 48 (5): 1275–1287. doi:10.1007/s10489-017-1105-y. ISSN 1573-7497. S2CID 254227914.
  121. ^ Navarro, V. (31 March 2008). «Politics and health: a neglected area of research». The European Journal of Public Health. 18 (4): 354–355. doi:10.1093/eurpub/ckn040. PMID 18524802.
  122. ^ Farley-Ripple, Elizabeth N.; Oliver, Kathryn; Boaz, Annette (7 September 2020). «Mapping the community: use of research evidence in policy and practice». Humanities and Social Sciences Communications. 7 (1): 1–10. doi:10.1057/s41599-020-00571-2. ISSN 2662-9992.
  123. ^ a b Lamers, Wout S; Boyack, Kevin; Larivière, Vincent; Sugimoto, Cassidy R; van Eck, Nees Jan; Waltman, Ludo; Murray, Dakota (24 December 2021). «Investigating disagreement in the scientific literature». eLife. 10: e72737. doi:10.7554/eLife.72737. ISSN 2050-084X. PMC 8709576. PMID 34951588.
  124. ^ LeLorier J, Grégoire G, Benhaddad A, Lapierre J, Derderian F (August 1997). «Discrepancies between meta-analyses and subsequent large randomized, controlled trials». The New England Journal of Medicine. 337 (8): 536–542. doi:10.1056/NEJM199708213370806. PMID 9262498.
  125. ^ a b Slavin RE (1986). «Best-Evidence Synthesis: An Alternative to Meta-Analytic and Traditional Reviews». Educational Researcher. 15 (9): 5–9. doi:10.3102/0013189X015009005. S2CID 146457142.
  126. ^ Hunter JE, Schmidt FL, Jackson GB, et al. (American Psychological Association. Division of Industrial-Organizational Psychology) (1982). Meta-analysis: cumulating research findings across studies. Beverly Hills, California: Sage. ISBN 978-0-8039-1864-1.
  127. ^ Glass GV, McGaw B, Smith ML (1981). Meta-analysis in social research. Beverly Hills, California: Sage Publications. ISBN 978-0-8039-1633-3.
  128. ^ Stone, Dianna L.; Rosopa, Patrick J. (1 March 2017). «The Advantages and Limitations of Using Meta-analysis in Human Resource Management Research». Human Resource Management Review. 27 (1): 1–7. doi:10.1016/j.hrmr.2016.09.001. ISSN 1053-4822.
  129. ^ Elliott, Julian; Lawrence, Rebecca; Minx, Jan C.; Oladapo, Olufemi T.; Ravaud, Philippe; Tendal Jeppesen, Britta; Thomas, James; Turner, Tari; Vandvik, Per Olav; Grimshaw, Jeremy M. (December 2021). «Decision makers need constantly updated evidence synthesis». Nature. 600 (7889): 383–385. Bibcode:2021Natur.600..383E. doi:10.1038/d41586-021-03690-1. PMID 34912079. S2CID 245220047.
  130. ^ Snyder, Alison (14 October 2021). «New ideas are struggling to emerge from the sea of science». Axios. Retrieved 15 November 2021.
  131. ^ Chu, Johan S. G.; Evans, James A. (12 October 2021). «Slowed canonical progress in large fields of science». Proceedings of the National Academy of Sciences. 118 (41): e2021636118. Bibcode:2021PNAS..11821636C. doi:10.1073/pnas.2021636118. ISSN 0027-8424. PMC 8522281. PMID 34607941.
  132. ^ «Sharing of tacit knowledge is most important aspect of mentorship, study finds». phys.org. Retrieved 4 July 2020.
  133. ^ Ma, Yifang; Mukherjee, Satyam; Uzzi, Brian (23 June 2020). «Mentorship and protégé success in STEM fields». Proceedings of the National Academy of Sciences. 117 (25): 14077–14083. Bibcode:2020PNAS..11714077M. doi:10.1073/pnas.1915516117. ISSN 0027-8424. PMC 7322065. PMID 32522881.
  134. ^ «Science of Science authors hope to spark conversations about the scientific enterprise». phys.org. Retrieved 28 January 2023.
  135. ^ van Dijk, Peter J.; Jessop, Adrienne P.; Ellis, T. H. Noel (July 2022). «How did Mendel arrive at his discoveries?». Nature Genetics. 54 (7): 926–933. doi:10.1038/s41588-022-01109-9. ISSN 1546-1718. PMID 35817970. S2CID 250454204.
  136. ^ Root-Bernstein, Robert S.; Bernstein, Maurine; Garnier, Helen (1 April 1995). «Correlations Between Avocations, Scientific Style, Work Habits, and Professional Impact of Scientists». Creativity Research Journal. 8 (2): 115–137. doi:10.1207/s15326934crj0802_2. ISSN 1040-0419.
  137. ^ Ince, Sharon; Hoadley, Christopher; Kirschner, Paul A. (1 January 2022). «A qualitative study of social sciences faculty research workflows». Journal of Documentation. 78 (6): 1321–1337. doi:10.1108/JD-08-2021-0168. ISSN 0022-0418.
  138. ^ Nassi-Calò, Lilian (3 April 2014). «Researchers reading habits for scientific literature | SciELO in Perspective». Retrieved 25 February 2023.
  139. ^ Van Noorden, Richard (3 February 2014). «Scientists may be reaching a peak in reading habits». Nature. doi:10.1038/nature.2014.14658. Retrieved 25 February 2023.
  140. ^ Arshad, Alia; Ameen, Kanwal (1 January 2021). «Comparative analysis of academic scientists, social scientists and humanists’ scholarly information seeking habits». The Journal of Academic Librarianship. 47 (1): 102297. doi:10.1016/j.acalib.2020.102297. ISSN 0099-1333.
  141. ^ «Why it pays to join a big research group if you want to be more scientifically productive». Physics World. 24 November 2022. Retrieved 13 December 2022.
  142. ^ Zhang, Sam; Wapman, K. Hunter; Larremore, Daniel B.; Clauset, Aaron (16 November 2022). «Labor advantages drive the greater productivity of faculty at elite universities». Science Advances. 8 (46): eabq7056. arXiv:2204.05989. Bibcode:2022SciA....8.7056Z. doi:10.1126/sciadv.abq7056. ISSN 2375-2548. PMC 9674273. PMID 36399560.
  143. ^ «Academic Incentives and Research Impact: Developing Reward and Recognition Systems to Better People’s Lives». DORA. Retrieved 28 January 2023.
  144. ^ Collison, Patrick; Cowen, Tyler (30 July 2019). «We Need a New Science of Progress». The Atlantic. Retrieved 25 January 2023.
  145. ^ Lovely, Garrison. «Do we need a better understanding of ‘progress’?». BBC. Retrieved 27 January 2023.
  146. ^ Niehaus, Paul; Williams, Heidi. «Developing the science of science». Works in Progress. Retrieved 25 January 2023.
  147. ^ «Registered Replication Reports». Association for Psychological Science. Retrieved 2015-11-13.
  148. ^ Chambers, Chris (2014-05-20). «Psychology’s ‘registration revolution’». The Guardian. Retrieved 2015-11-13.
  149. ^ Simera, I; Moher, D; Hirst, A; Hoey, J; Schulz, KF; Altman, DG (2010). «Transparent and accurate reporting increases reliability, utility, and impact of your research: reporting guidelines and the EQUATOR Network». BMC Medicine. 8: 24. doi:10.1186/1741-7015-8-24. PMC 2874506. PMID 20420659.
  150. ^ Simera, I.; Moher, D.; Hoey, J.; Schulz, K. F.; Altman, D. G. (2010). «A catalogue of reporting guidelines for health research». European Journal of Clinical Investigation. 40 (1): 35–53. doi:10.1111/j.1365-2362.2009.02234.x. PMID 20055895.
  151. ^ Simera, I; Altman, DG (October 2009). «Writing a research article that is «fit for purpose»: EQUATOR Network and reporting guidelines». Evidence-Based Medicine. 14 (5): 132–134. doi:10.1136/ebm.14.5.132. PMID 19794009. S2CID 36739841.
  152. ^ Ep. 49: Joel Chan on metascience, creativity, and tools for thought.
  153. ^ «Risk of Bias Tool | Cochrane Bias». methods.cochrane.org. Retrieved 25 January 2023.
  154. ^ Prasad, Vinay; Ioannidis, John P. A. (November 2022). «Constructive and obsessive criticism in science». European Journal of Clinical Investigation. 52 (11): e13839. doi:10.1111/eci.13839. ISSN 0014-2972. PMC 9787955. PMID 35869811.
  155. ^ Khamsi, Roxanne (1 May 2020). «Coronavirus in context: Scite.ai tracks positive and negative citations for COVID-19 literature». Nature. doi:10.1038/d41586-020-01324-6. Retrieved 19 February 2022.
  156. ^ Nicholson, Josh M.; Mordaunt, Milo; Lopez, Patrice; Uppala, Ashish; Rosati, Domenic; Rodrigues, Neves P.; Grabitz, Peter; Rife, Sean C. (5 November 2021). «scite: A smart citation index that displays the context of citations and classifies their intent using deep learning». Quantitative Science Studies. 2 (3): 882–898. doi:10.1162/qss_a_00146. S2CID 232283218.
  157. ^ a b c «New bot flags scientific studies that cite retracted papers». Nature Index. Retrieved 25 January 2023.
  158. ^
  159. ^ Segado-Boj, Francisco; Martín-Quevedo, Juan; Prieto-Gutiérrez, Juan-José (12 December 2022). «Jumping over the paywall: Strategies and motivations for scholarly piracy and other alternatives» (PDF). Information Development. doi:10.1177/02666669221144429. ISSN 0266-6669. S2CID 254564205.
  160. ^ Gosztyla, Maya (7 July 2022). «How to find, read and organize papers». Nature. doi:10.1038/d41586-022-01878-7. PMID 35804061. S2CID 250388551. Retrieved 28 January 2023.
  161. ^ Fastrez, Pierre; Jacques, Jerry (2015). «Managing References by Filing and Tagging». Human Interface and the Management of Information. Information and Knowledge Design. Lecture Notes in Computer Science. Springer International Publishing. 9172: 291–300. doi:10.1007/978-3-319-20612-7_28. ISBN 978-3-319-20611-0.
  162. ^ Chaudhry, Abdus Sattar; Alajmi, Bibi M. (1 January 2022). «Personal information management practices: how scientists find and organize information». Global Knowledge, Memory and Communication. ahead-of-print (ahead-of-print). doi:10.1108/GKMC-04-2022-0082. S2CID 253363619.
  163. ^ Chang, Joseph Chee; Kim, Yongsung; Miller, Victor; Liu, Michael Xieyang; Myers, Brad A; Kittur, Aniket (12 October 2021). «Tabs.do: Task-Centric Browser Tab Management». The 34th Annual ACM Symposium on User Interface Software and Technology. Association for Computing Machinery: 663–676. doi:10.1145/3472749.3474777. ISBN 9781450386357. S2CID 237102658.
  164. ^ Rasberry, Lane; Tibbs, Sheri; Hoos, William; Westermann, Amy; Keefer, Jeffrey; Baskauf, Steven James; Anderson, Clifford; Walker, Philip; Kwok, Cherrie; Mietchen, Daniel (4 April 2022). «WikiProject Clinical Trials for Wikidata». doi:10.1101/2022.04.01.22273328. S2CID 247936371.
  165. ^ Moral-Muñoz, José A.; Herrera-Viedma, Enrique; Santisteban-Espejo, Antonio; Cobo, Manuel J. (19 January 2020). «Software tools for conducting bibliometric analysis in science: An up-to-date review». El Profesional de la Información. 29 (1). doi:10.3145/epi.2020.ene.03. S2CID 210926828.
  166. ^ «A new replication crisis: Research that is less likely to be true is cited more». phys.org. Retrieved 14 June 2021.
  167. ^ Serra-Garcia, Marta; Gneezy, Uri (2021-05-01). «Nonreplicable publications are cited more than replicable ones». Science Advances. 7 (21): eabd1705. Bibcode:2021SciA....7D1705S. doi:10.1126/sciadv.abd1705. ISSN 2375-2548. PMC 8139580. PMID 34020944.
  168. ^ Parker, Lisa; Boughton, Stephanie; Lawrence, Rosa; Bero, Lisa (1 November 2022). «Experts identified warning signs of fraudulent research: a qualitative study to inform a screening tool». Journal of Clinical Epidemiology. 151: 1–17. doi:10.1016/j.jclinepi.2022.07.006. PMID 35850426. S2CID 250632662.
  169. ^ Ioannidis, JPA (2016). «Why Most Clinical Research Is Not Useful». PLOS Med. 13 (6): e1002049. doi:10.1371/journal.pmed.1002049. PMC 4915619. PMID 27328301.
  170. ^ Ioannidis JA (13 July 2005). «Contradicted and initially stronger effects in highly cited clinical research». JAMA. 294 (2): 218–228. doi:10.1001/jama.294.2.218. PMID 16014596.
  171. ^ Chalmers, Iain; Glasziou, Paul (2009). «Avoidable waste in the production and reporting of research evidence». The Lancet. 374 (9683): 86–89. doi:10.1016/S0140-6736(09)60329-9. ISSN 0140-6736. PMID 19525005. S2CID 11797088.
  172. ^ Hsu, Jeremy (24 June 2010). «Dark Side of Medical Research: Widespread Bias and Omissions». Live Science. Retrieved 24 May 2019.
  173. ^ «Confronting conflict of interest». Nature Medicine. 24 (11): 1629. November 2018. doi:10.1038/s41591-018-0256-7. ISSN 1546-170X. PMID 30401866.
  174. ^ Haque, Waqas; Minhajuddin, Abu; Gupta, Arjun; Agrawal, Deepak (2018). «Conflicts of interest of editors of medical journals». PLOS ONE. 13 (5): e0197141. Bibcode:2018PLoSO..1397141H. doi:10.1371/journal.pone.0197141. ISSN 1932-6203. PMC 5959187. PMID 29775468.
  175. ^ Moncrieff, J (March 2002). «The antidepressant debate». The British Journal of Psychiatry. 180 (3): 193–194. doi:10.1192/bjp.180.3.193. ISSN 0007-1250. PMID 11872507.
  176. ^ Bello, S; Moustgaard, H; Hróbjartsson, A (October 2014). «The risk of unblinding was infrequently and incompletely reported in 300 randomized clinical trial publications». Journal of Clinical Epidemiology. 67 (10): 1059–1069. doi:10.1016/j.jclinepi.2014.05.007. ISSN 1878-5921. PMID 24973822.
  177. ^ Tuleu, Catherine; Legay, Helene; Orlu-Gul, Mine; Wan, Mandy (1 September 2013). «Blinding in pharmacological trials: the devil is in the details». Archives of Disease in Childhood. 98 (9): 656–659. doi:10.1136/archdischild-2013-304037. ISSN 0003-9888. PMC 3833301. PMID 23898156.
  178. ^ Kirsch, I (2014). «Antidepressants and the Placebo Effect». Zeitschrift für Psychologie. 222 (3): 128–134. doi:10.1027/2151-2604/a000176. ISSN 2190-8370. PMC 4172306. PMID 25279271.
  179. ^ Ioannidis, John PA (27 May 2008). «Effectiveness of antidepressants: an evidence myth constructed from a thousand randomized trials?». Philosophy, Ethics, and Humanities in Medicine. 3: 14. doi:10.1186/1747-5341-3-14. ISSN 1747-5341. PMC 2412901. PMID 18505564.
  180. ^ Moher, David; Altman, Douglas G.; Schulz, Kenneth F. (24 March 2010). «CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials». BMJ. 340: c332. doi:10.1136/bmj.c332. ISSN 0959-8138. PMC 2844940. PMID 20332509.
  181. ^ Clarke, Michael; Chalmers, Iain (1998). «Discussion Sections in Reports of Controlled Trials Published in General Medical Journals». JAMA. 280 (3): 280–282. doi:10.1001/jama.280.3.280. PMID 9676682.
  182. ^ a b Lau, Joseph; Antman, Elliott M; Jimenez-Silva, Jeanette; Kupelnick, Bruce; Mosteller, Frederick; Chalmers, Thomas C (1992). «Cumulative Meta-Analysis of Therapeutic Trials for Myocardial Infarction». New England Journal of Medicine. 327 (4): 248–254. doi:10.1056/NEJM199207233270406. PMID 1614465.
  183. ^ Fergusson, Dean; Glass, Kathleen Cranley; Hutton, Brian; Shapiro, Stan (2016). «Randomized controlled trials of aprotinin in cardiac surgery: Could clinical equipoise have stopped the bleeding?». Clinical Trials. 2 (3): 218–229, discussion 229–232. doi:10.1191/1740774505cn085oa. PMID 16279145. S2CID 31375469.
  184. ^ Clarke, Mike; Brice, Anne; Chalmers, Iain (2014). «Accumulating Research: A Systematic Account of How Cumulative Meta-Analyses Would Have Provided Knowledge, Improved Health, Reduced Harm and Saved Resources». PLOS ONE. 9 (7): e102670. Bibcode:2014PLoSO...9j2670C. doi:10.1371/journal.pone.0102670. PMC 4113310. PMID 25068257.
  185. ^ a b Robinson, Karen A; Goodman, Steven N (2011). «A Systematic Examination of the Citation of Prior Research in Reports of Randomized, Controlled Trials». Annals of Internal Medicine. 154 (1): 50–55. doi:10.7326/0003-4819-154-1-201101040-00007. PMID 21200038. S2CID 207536137.
  186. ^ Epstein, David. «When Evidence Says No, but Doctors Say Yes — The Atlantic». Pocket. Retrieved 10 April 2020.
  187. ^ Tatsioni, A; Bonitsis, NG; Ioannidis, JP (5 December 2007). «Persistence of contradicted claims in the literature». JAMA. 298 (21): 2517–2526. doi:10.1001/jama.298.21.2517. ISSN 1538-3598. PMID 18056905.
  188. ^ Franco, Annie; Malhotra, Neil; Simonovits, Gabor (1 January 2016). «Underreporting in Psychology Experiments: Evidence From a Study Registry». Social Psychological and Personality Science. 7 (1): 8–12. doi:10.1177/1948550615598377. ISSN 1948-5506. S2CID 143182733.
  189. ^ Munafò, Marcus (29 March 2017). «Metascience: Reproducibility blues». Nature. 543 (7647): 619–620. Bibcode:2017Natur.543..619M. doi:10.1038/543619a. ISSN 1476-4687.
  190. ^ Stokstad, Erik (20 September 2018). «This research group seeks to expose weaknesses in science – and they’ll step on some toes if they have to». Science. doi:10.1126/science.aav4784. S2CID 158525979.
  191. ^ Open Science Collaboration (2015). «Estimating the reproducibility of psychological science» (PDF). Science. 349 (6251): aac4716. doi:10.1126/science.aac4716. hdl:10722/230596. PMID 26315443. S2CID 218065162.
  192. ^ a b Allen, Christopher P G.; Mehler, David Marc Anton. «Open Science challenges, benefits and tips in early career and beyond». doi:10.31234/osf.io/3czyt. S2CID 240061030.
  193. ^ Simmons, Joseph P.; Nelson, Leif D.; Simonsohn, Uri (2011). «False-Positive Psychology». Psychological Science. 22 (11): 1359–1366. doi:10.1177/0956797611417632. PMID 22006061.
  194. ^ Stroebe, Wolfgang; Strack, Fritz (2014). «The Alleged Crisis and the Illusion of Exact Replication» (PDF). Perspectives on Psychological Science. 9 (1): 59–71. doi:10.1177/1745691613514450. PMID 26173241. S2CID 31938129.
  195. ^ Aschwanden, Christie (6 December 2018). «Psychology’s Replication Crisis Has Made The Field Better». FiveThirtyEight. Retrieved 19 December 2018.
  196. ^ Cohen, Jacob (1994). «The earth is round (p < .05)». American Psychologist. 49 (12): 997–1003. doi:10.1037/0003-066X.49.12.997. S2CID 380942.
  197. ^ MacCoun, Robert; Perlmutter, Saul (8 October 2015). «Blind analysis: Hide results to seek the truth». Nature. 526 (7572): 187–189. Bibcode:2015Natur.526..187M. doi:10.1038/526187a. PMID 26450040.
  198. ^ «Meta-Research Innovation Center Berlin». Meta-Research Innovation Center Berlin. Retrieved 2021-12-06.
  199. ^ «Home | Meta-research Innovation Center at Stanford». metrics.stanford.edu. Retrieved 2021-12-06.
  200. ^ «Meta-research and Evidence Synthesis Unit». The George Institute for Global Health. Retrieved 2021-12-19.
  201. ^ «Metascience 2021». Metascience 2021. Retrieved 20 February 2022.

Further reading[edit]

  • Lydia Denworth, «A Significant Problem: Standard scientific methods are under fire. Will anything change?», Scientific American, vol. 321, no. 4 (October 2019), pp. 62–67.
    • «The use of p values for nearly a century [since 1925] to determine statistical significance of experimental results has contributed to an illusion of certainty and [to] reproducibility crises in many scientific fields. There is growing determination to reform statistical analysis… Some [researchers] suggest changing statistical methods, whereas others would do away with a threshold for defining «significant» results.» (p. 63.)
  • Harris, Richard (2017). Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hopes, and Wastes Billions. Basic Books. ISBN 978-0465097913.
  • Fortunato, Santo; Bergstrom, Carl T.; et al. (2 March 2018). «Science of science». Science. 359 (6379): eaao0185. doi:10.1126/science.aao0185. PMC 5949209. PMID 29496846.

External links[edit]

Journals

  • Minerva: A Journal of Science, Learning and Policy
  • Research Integrity and Peer Review
  • Research Policy
  • Science and Public Policy

Conferences

  • Annual Metascience Conference

Fields and topics of meta-research[edit]

An exemplary visualization of a conception of scientific knowledge generation structured by layers, with the «Institution of Science» being the subject of metascience.

Metascience can be categorized into five major areas of interest: Methods, Reporting, Reproducibility, Evaluation, and Incentives. These correspond, respectively, with how to perform, communicate, verify, evaluate, and reward research.[1]

Methods[edit]

Metascience seeks to identify poor research practices – including biases in research, poor study design, and abuse of statistics – and to find methods for reducing these practices.[1] Meta-research has identified numerous biases in scientific literature.[15] Of particular note is the widespread misuse of p-values and abuse of statistical significance.[16]
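
How flexible analysis inflates false positives can be demonstrated with a short simulation. The following sketch is illustrative only (the sample sizes, outcome counts and seed are arbitrary): it tests pure noise, so every «significant» result is a false positive, and reporting the smallest of several p-values pushes the false-positive rate well above the nominal 5%:

    # Illustrative simulation: one pre-registered test versus the
    # questionable practice of testing several outcomes and reporting
    # the smallest p-value. All data are random noise by construction.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, n_outcomes, n = 2000, 5, 30
    fp_single = fp_best_of_five = 0

    for _ in range(n_sims):
        a = rng.normal(size=(n_outcomes, n))  # no true effect anywhere
        b = rng.normal(size=(n_outcomes, n))
        pvals = [stats.ttest_ind(x, y).pvalue for x, y in zip(a, b)]
        fp_single += pvals[0] < 0.05          # one pre-registered outcome
        fp_best_of_five += min(pvals) < 0.05  # best of five outcomes

    print(f"pre-registered outcome: {fp_single / n_sims:.1%}")       # ~5%
    print(f"best of five outcomes:  {fp_best_of_five / n_sims:.1%}")  # ~20%+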

Scientific data science[edit]

Scientific data science is the use of data science to analyse research papers. It encompasses both qualitative and quantitative methods. Research in scientific data science includes fraud detection[17] and citation network analysis.[18]

Journalology[edit]

Journalology, also known as publication science, is the scholarly study of all aspects of the academic publishing process.[19][20] The field seeks to improve the quality of scholarly research by implementing evidence-based practices in academic publishing.[21] The term «journalology» was coined by Stephen Lock, the former editor-in-chief of The BMJ. The first Peer Review Congress, held in 1989 in Chicago, Illinois, is considered a pivotal moment in the founding of journalology as a distinct field.[21] The field of journalology has been influential in pushing for study pre-registration in science, particularly in clinical trials. Clinical-trial registration is now expected in most countries.[21]

Reporting[edit]

Meta-research has identified poor practices in reporting, explaining, disseminating and popularizing research, particularly within the social and health sciences. Poor reporting makes it difficult to accurately interpret the results of scientific studies, to replicate studies, and to identify biases and conflicts of interest in the authors. Solutions include the implementation of reporting standards, and greater transparency in scientific studies (including better requirements for disclosure of conflicts of interest). There is an attempt to standardize reporting of data and methodology through the creation of guidelines by reporting agencies such as CONSORT and the larger EQUATOR Network.[1]

Reproducibility[edit]

The replication crisis is an ongoing methodological crisis in which it has been found that many scientific studies are difficult or impossible to replicate.[22][23] While the crisis has its roots in the meta-research of the mid- to late-1900s, the phrase «replication crisis» was not coined until the early 2010s[24] as part of a growing awareness of the problem.[1] The replication crisis particularly affects psychology (especially social psychology) and medicine,[25][26] including cancer research.[27][28] Replication is an essential part of the scientific process, and the widespread failure of replication puts into question the reliability of affected fields.[29]

Moreover, replication of research (or failure to replicate) is considered less influential than original research, and is less likely to be published in many fields. This discourages researchers from reporting, and even from attempting, replication studies.[30][31]

Evaluation and incentives[edit]

Metascience seeks to create a scientific foundation for peer review. Meta-research evaluates peer review systems including pre-publication peer review, post-publication peer review, and open peer review. It also seeks to develop better research funding criteria.[1]

Metascience seeks to promote better research through better incentive systems. This includes studying the accuracy, effectiveness, costs, and benefits of different approaches to ranking and evaluating research and those who perform it.[1] Critics argue that perverse incentives have created a publish-or-perish environment in academia which promotes the production of junk science, low-quality research, and false positives.[32][33] According to Brian Nosek, «The problem that we face is that the incentive system is focused almost entirely on getting research published, rather than on getting research right.»[34] Proponents of reform seek to structure the incentive system to favor higher-quality results,[35] for example by judging quality on the basis of narrative expert evaluations («rather than [only or mainly] indices»), institutional evaluation criteria, guarantees of transparency, and professional standards.[36]

Contributorship

Studies have proposed machine-readable standards and (a taxonomy of) badges for science publication management systems that home in on contributorship – who has contributed what and how much of the research labor – rather than the traditional concept of plain authorship – who was involved in any way in the creation of a publication.[37][38][39][40] A study pointed out one of the problems associated with the ongoing neglect of such contribution nuances: it found that «the number of publications has ceased to be a good metric as a result of longer author lists, shorter papers, and surging publication numbers».[41]

Assessment factors

Factors other than a submission’s merits can substantially influence peer reviewers’ evaluations.[42] Some such factors may nevertheless be legitimate: for instance, track records of the veracity of a researcher’s prior publications, and their alignment with public interests. Evaluation systems – including those of peer review – may, however, substantially lack mechanisms and criteria oriented towards merit, real-world positive impact, progress, and public usefulness, rather than towards analytical indicators such as citation counts or altmetrics, even though the latter can serve as partial indicators of those ends.[43][44] Rethinking the academic reward structure «to offer more formal recognition for intermediate products, such as data» could have positive impacts and reduce data withholding.[45]

Recognition of training

A commentary noted that academic rankings do not consider where (in which country and at which institute) the respective researchers were trained.[46]

Scientometrics[edit]

Scientometrics concerns itself with measuring bibliographic data in scientific publications. Major research issues include the measurement of the impact of research papers and academic journals, the understanding of scientific citations, and the use of such measurements in policy and management contexts.[47] Studies suggest that «metrics used to measure academic success, such as the number of publications, citation number, and impact factor, have not changed for decades» and have to some degree «ceased» to be good measures,[41] leading to issues such as «overproduction, unnecessary fragmentations, overselling, predatory journals (pay and publish), clever plagiarism, and deliberate obfuscation of scientific results so as to sell and oversell».[48]
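
One widely used metric of this kind, the h-index, is simple enough to compute in a few lines. The sketch below is a standard textbook implementation (an author has index h if h of their papers have at least h citations each), not tied to any particular citation database:

    def h_index(citations: list[int]) -> int:
        """Largest h such that h papers have at least h citations each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for i, c in enumerate(ranked, start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
    print(h_index([25, 8, 5, 3, 3]))  # 3: one highly cited paper adds little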

Novel tools in this area include systems that quantify how much a cited node informs the citing node.[49] Such scores can be used to convert an unweighted citation network into a weighted one, and then for importance assessment, deriving «impact metrics for the various entities involved, like the publications, authors etc»,[50] as well as, among other applications, for search engines and recommendation systems.
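
A minimal sketch of this idea follows. It is not the method of the cited studies: the per-edge «informs» scores are invented for illustration, and weighted PageRank stands in for the importance-assessment step:

    # Sketch: a weighted citation graph where each edge weight is a
    # hypothetical score for how much the cited paper informed the citing
    # paper; weighted PageRank then serves as one possible impact metric.
    import networkx as nx

    G = nx.DiGraph()
    G.add_weighted_edges_from([
        ("paper_C", "paper_A", 0.9),  # C draws heavily on A
        ("paper_C", "paper_B", 0.1),  # C cites B only in passing
        ("paper_D", "paper_A", 0.7),
        ("paper_D", "paper_C", 0.5),
    ])

    scores = nx.pagerank(G, alpha=0.85, weight="weight")
    for paper, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{paper}: {score:.3f}")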

Science governance[edit]

Science funding and science governance can also be explored and informed by metascience.[51]

Incentives[edit]

Various interventions, such as the prioritization of certain lines of research and development, can be important. For instance, the concept of differential technological development refers to deliberately developing technologies – e.g. control-, safety- and policy-technologies versus risky biotechnologies – at different precautionary paces to decrease risks, mainly global catastrophic risk, by influencing the sequence in which technologies are developed.[52][53] Relying only on the established forms of legislation and incentives to ensure the right outcomes may not be adequate, as these may often be too slow[54] or inappropriate.

Other incentives to govern science and related processes, including via metascience-based reforms, may include ensuring accountability to the public (in terms of, e.g., the accessibility of research – especially publicly funded research – or of its addressing various research topics of public interest in serious manners), increasing the qualified productive scientific workforce, improving the efficiency of science to improve problem-solving in general, and ensuring that unambiguous societal needs based on solid scientific evidence – such as about human physiology – are adequately prioritized and addressed. Such interventions, incentives and intervention designs can themselves be subjects of metascience.

Science funding and awards[edit]

Cluster network of scientific publications in relation to Nobel prizes.

Funding for climate research in the natural and technical sciences versus the social sciences and humanities[55]

Scientific awards are one category of science incentives. Metascience can explore existing and hypothetical systems of science awards. For instance, it found that work honored by Nobel prizes clusters in only a few scientific fields: depending on whether science is divided into 114 broader or 849 narrower domains (the DC2 and DC3 classification systems, respectively), only 36 or 71 of those domains had received at least one Nobel prize. Five of the 114 domains were shown to make up over half of the Nobel prizes awarded 1995–2017 (particle physics [14%], cell biology [12.1%], atomic physics [10.9%], neuroscience [10.1%], molecular chemistry [5.3%]).[56][57]

A study found that the delegation of responsibility by policy-makers – a centralized, authority-based, top-down approach in which knowledge production and appropriate funding are handed to science, with science subsequently somehow delivering «reliable and useful knowledge to society» – is too simple.[51]

Measurements show that the allocation of biomedical resources can be more strongly correlated with previous allocations and research than with the burden of diseases.[58]

A study suggests that «[i]f peer review is maintained as the primary mechanism of arbitration in the competitive selection of research reports and funding, then the scientific community needs to make sure it is not arbitrary».[42]

Studies indicate that there is a need to «reconsider how we measure success» (see #Factors of success and progress).[41]

Funding data

Funding information from grant databases and funding-acknowledgment sections can be a source of data for scientometric studies, e.g. for investigating or recognizing the impact of funding entities on the development of science and technology.[59]

Research questions and coordination[edit]

Scientists often communicate open research questions. Sometimes such questions are crowdsourced and/or aggregated, and sometimes supplemented with priorities or other details. A common way open research questions are identified, communicated, established/confirmed and prioritized is their inclusion in scientific reviews of a sub-field or of a specific research question, including in systematic reviews and meta-analyses. Other channels include reports by science journalists and dedicated (sub-)websites such as 80000hours.org’s «research questions by discipline»[60] or the Wikipedia articles listing unsolved problems,[61][62][63] aggregative/integrative studies,[61] as well as unresolved posts on Q&A websites and forums, sometimes categorized or marked as unsolved.[64] Online surveys have been used to generate priority research topics, which were then classified into broader themes.[65] Such efforts may improve research relevance and value,[66] or strengthen the rationale for dedicating limited societal resources to research, for expanding those resources, or for funding a specific study.[citation needed]

Risk governance[edit]

See also: § Differential R&D

Biosecurity requires the cooperation of scientists, technicians, policy makers, security engineers, and law enforcement officials.[67][68]

Philosopher Toby Ord, in his 2020 book The Precipice: Existential Risk and the Future of Humanity, puts into question whether the current international conventions regarding biotechnology research and development regulation, and self-regulation by biotechnology companies and the scientific community are adequate.[69][70]

American scientists have proposed, in a paywalled article, various policy-based measures to reduce the large risks from life-sciences research, such as pandemics arising through accident or misapplication. Risk management measures may include novel international guidelines, effective oversight, improvement of US policies to influence policies globally, and identification of gaps in biosecurity policies along with potential approaches to address them.[71][72]

Science communication and public use[edit]

It has been argued that «science has two fundamental attributes that underpin its value as a global public good: that knowledge claims and the evidence on which they are based are made openly available to scrutiny, and that the results of scientific research are communicated promptly and efficiently».[73] Metascientific research is exploring topics of science communication such as media coverage of science, science journalism and online communication of results by science educators and scientists.[74][75][76][77] A study found that the «main incentive academics are offered for using social media is amplification» and that it should be «moving towards an institutional culture that focuses more on how these [or such] platforms can facilitate real engagement with research».[78] Science communication may also involve the communication of societal needs, concerns and requests to scientists.

Alternative metrics tools

Alternative metrics tools can be used not only to help in assessment (of performance and impact)[58] and findability, but also to aggregate many of the public discussions about a scientific paper – on social media such as Reddit, in citations on Wikipedia, and in news reports about the study – which can then in turn be analyzed in metascience or provided to and used by related tools.[79] In terms of assessment and findability, altmetrics rate publications’ performance or impact by the interactions they receive through social media or other online platforms,[80] which can, for example, be used to sort recent studies by measured impact, including before other studies cite them. The specific procedures of established altmetrics are not transparent,[80] and the algorithms used cannot be customized or altered by the user as open-source software can. A study has described various limitations of altmetrics and points «toward avenues for continued research and development».[81] Altmetrics are also limited in their use as a primary tool for researchers to find received constructive feedback (see above).
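
The aggregation step behind such scores can be illustrated with a toy example. The platform weights below are invented for this sketch; as noted above, the weightings used by established altmetrics providers are not transparent:

    # Toy altmetric-style score: a weighted sum of per-platform mention
    # counts. The weights are hypothetical, chosen only for illustration.
    HYPOTHETICAL_WEIGHTS = {"news": 8.0, "wikipedia": 3.0, "reddit": 0.25, "twitter": 0.25}

    def altmetric_like_score(mentions: dict[str, int]) -> float:
        """Combine mention counts across platforms into one number."""
        return sum(HYPOTHETICAL_WEIGHTS.get(platform, 0.0) * count
                   for platform, count in mentions.items())

    paper_mentions = {"news": 2, "wikipedia": 1, "reddit": 12, "twitter": 40}
    print(altmetric_like_score(paper_mentions))  # 2*8 + 1*3 + 12*0.25 + 40*0.25 = 32.0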

Societal implications and applications

It has been suggested that it may benefit science if «intellectual exchange—particularly regarding the societal implications and applications of science and technology—are better appreciated and incentivized in the future».[58]

Knowledge integration

Primary studies «without context, comparison or summary are ultimately of limited value», and various types[additional citation(s) needed] of research syntheses and summaries integrate primary studies.[82] Progress on key social-ecological challenges of the global environmental agenda is «hampered by a lack of integration and synthesis of existing scientific evidence», with a «fast-increasing volume of data», compartmentalized information and generally unmet evidence-synthesis challenges.[83] According to Khalil, researchers face the problem of too many papers – in March 2014, for example, more than 8,000 papers were submitted to arXiv – and to «keep up with the huge amount of literature, researchers use reference manager software, they make summaries and notes, and they rely on review papers to provide an overview of a particular topic». He notes that review papers are usually (only) «for topics in which many papers were written already, and they can get outdated quickly», and suggests «wiki-review papers» that are continuously updated with new studies on a topic, summarize many studies’ results, and suggest future research.[84] A study suggests that the citation of a scientific publication in a Wikipedia article could potentially be considered an indicator of some form of impact for this publication,[80] for example because it may, over time, indicate that the reference has contributed to a high-level summary of the given topic.

Further information: § Knowledge integration and living documents

Science journalism

Science journalists play an important role in the scientific ecosystem and in science communication to the public, and need to «know how to use relevant information when deciding whether to trust a research finding, and whether and how to report on it», vetting the findings that get transmitted to the public.[85]

Science education[edit]

Some studies investigate science education, e.g. the teaching of selected scientific controversies,[86] of the historical discovery processes behind major scientific conclusions,[87] and of common scientific misconceptions.[88] Education can also be a research topic more generally, such as how to improve the quality of scientific outputs, how to reduce the time needed before beginning scientific work, and how to enlarge and retain various scientific workforces.

Science misconceptions and anti-science attitudes[edit]

Many students have misconceptions about what science is and how it works.[89] Anti-science attitudes and beliefs are also a subject of research.[90][91] Hotez suggests antiscience «has emerged as a dominant and highly lethal force, and one that threatens global security», and that there is a need for «new infrastructure» that mitigates it.[92]

Evolution of sciences[edit]

Scientific practice[edit]

Number of authors of research articles in six journals through time[36]

Trends in the diversity of cited work, the mean number of self-citations, and the mean age of cited work may indicate that papers are using «narrower portions of existing knowledge».[93]

Metascience can investigate how scientific processes evolve over time. A study found that teams are growing in size, «increasing by an average of 17% per decade».[58] (see labor advantage below)

ArXiv’s yearly submission rate growth over 30 years.[94]

Research has found that prevalent forms of non-open-access publication, and the prices charged for many conventional journals – even for publicly funded papers – are unwarranted, unnecessary or suboptimal, and constitute detrimental barriers to scientific progress.[73][95][96][97] Open access can save considerable financial resources, which could be used otherwise, and can level the playing field for researchers in developing countries.[98] There are substantial expenses for subscriptions, for access to specific studies, and for article processing charges. Paywall: The Business of Scholarship is a documentary on these issues.[99]

Another topic is the established styles of scientific communication (e.g. long text-form studies and reviews) and scientific publishing practices – there are concerns about a «glacial pace» of conventional publishing.[100] The use of preprint servers to publish study drafts early is increasing, and open peer review,[101] new tools to screen studies,[102] and improved matching of submitted manuscripts to reviewers[103] are among the proposals to speed up publication.

Science overall and intrafield developments[edit]

A visualization of scientific outputs by field in OpenAlex.[104]
A study can be part of multiple fields,[clarification needed] and lower numbers of papers are not necessarily detrimental[48] for fields.

Change of number of scientific papers by field according to OpenAlex[104]

Number of PubMed search results for «coronavirus» by year from 1949 to 2020.

Studies have various kinds of metadata which can be utilized, complemented and made accessible in useful ways. OpenAlex is a free online index of over 200 million scientific documents that integrates and provides metadata such as sources, citations, author information, scientific fields and research topics. Its API and open source website can be used for metascience, scientometrics and novel tools that query this semantic web of papers.[105][106][107] Another project under development, Scholia, uses metadata of scientific publications for various visualizations and aggregation features such as providing a simple user interface summarizing literature about a specific feature of the SARS-CoV-2 virus using Wikidata’s «main subject» property.[108]
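As a concrete illustration, the OpenAlex API can be queried with a few lines of Python. The endpoint and parameters below follow OpenAlex’s public documentation; the printed fields are only a small sample of the metadata each record carries.

```python
# A short sketch querying the public OpenAlex REST API (api.openalex.org).
import requests

resp = requests.get(
    "https://api.openalex.org/works",
    params={"search": "metascience", "per-page": 5},
    timeout=30,
)
resp.raise_for_status()
for work in resp.json()["results"]:
    # Each work record includes metadata such as title, year and citations.
    print(work["publication_year"], work["cited_by_count"], work["display_name"])
```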

Subject-level resolutions

Beyond metadata explicitly assigned to studies by humans, natural language processing and AI can be used to assign research publications to topics. One study investigating the impact of science awards used such methods to associate a paper’s text (not just its keywords) with the linguistic content of Wikipedia’s scientific topic pages («pages are created and updated by scientists and users through crowdsourcing»), creating meaningful and plausible classifications of high-fidelity scientific topics for further analysis or navigability.[109]
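The general idea of such text-based topic assignment – not the cited study’s actual pipeline – can be sketched by scoring a paper’s text against topic description texts with TF-IDF cosine similarity. The topic texts below are hypothetical stand-ins for, e.g., Wikipedia topic pages.

```python
# A minimal sketch of text-based topic assignment via TF-IDF similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

topic_pages = {  # hypothetical stand-ins for topic description texts
    "genomics": "genome sequencing DNA gene expression variants",
    "astrophysics": "galaxies stellar evolution cosmology dark matter",
}
paper_text = "We sequence genomes to study gene expression in tumours."

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(topic_pages.values()) + [paper_text])
# Compare the paper (last row) against each topic description.
similarities = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
best_topic, score = max(zip(topic_pages, similarities), key=lambda t: t[1])
print(best_topic, round(score, 3))  # -> genomics, with some similarity score
```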

Further information: § Topic mapping

Growth or stagnation of science overall[edit]

Rough trend of scholarly publications about biomarkers according to Scholia; biomarker-related publications may not follow closely the number of viable biomarkers[110]

The CD index for papers published in Nature, PNAS, and Science and Nobel-Prize-winning papers[93]

The CD index may indicate a «decline of disruptive science and technology»[93]

Metascience research investigates the growth of science overall, using e.g. data on the number of publications in bibliographic databases. A study found that segments with different growth rates appear related to phases of «economic (e.g., industrialization)» – money is considered a necessary input to the science system – «and/or political developments (e.g., Second World War)». The study also confirmed a recent exponential growth in the volume of scientific literature and calculated an average doubling period of 17.3 years.[111]
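Assuming clean exponential growth (an idealization), a doubling period translates directly into an annual growth rate:

```latex
% A doubling period T_d implies
N(t) = N_0 \cdot 2^{\,t/T_d},
\qquad
r = \frac{\ln 2}{T_d} = \frac{0.693}{17.3\ \text{yr}} \approx 0.040\ \text{yr}^{-1}
```

so the reported 17.3-year doubling period corresponds to roughly 4% more publications each year.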

However, others have pointed out that it is difficult to measure scientific progress in meaningful ways, partly because it is hard to accurately evaluate how important any given scientific discovery is. A variety of perspectives on the trajectory of science overall (impact, number of major discoveries, etc.) have been described in books and articles, including that science is becoming harder (per dollar or hour spent), that if science is «slowing today, it is because science has remained too focused on established fields», that papers and patents are increasingly less likely to be «disruptive» in terms of breaking with the past as measured by the «CD index»,[93] and that there is a great stagnation – possibly as part of a larger trend[112] – whereby e.g. «things haven’t changed nearly as much since the 1970s» when excluding the computer and the Internet.

A better understanding of potential slowdowns according to some measures could be a major opportunity to improve humanity’s future.[113] For example, emphasis on citations in the measurement of scientific productivity, information overload,[112] reliance on a narrower set of existing knowledge (which may include narrow specialization and related contemporary practices),[93] and risk-avoidant funding structures[114] may have pushed research «toward incremental science and away from exploratory projects that are more likely to fail».[115] The study that introduced the «CD index» suggests the overall number of papers has risen while the total number of «highly disruptive» papers, as measured by the index, has not (notably, the 1998 discovery of the accelerating expansion of the universe has a CD index of 0). Their results also suggest scientists and inventors «may be struggling to keep up with the pace of knowledge expansion».[116][93]
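One common formulation of the disruption («CD») index can be sketched as follows; the cited study’s implementation includes further refinements, so this is illustrative only. Among papers published after a focal paper, those citing the focal paper but none of its references count as disruptive evidence, those citing both count as consolidating evidence, and those citing only its references enter the denominator.

```python
# A minimal sketch of one common formulation of the disruption ("CD") index:
# D = (n_i - n_j) / (n_i + n_j + n_k), ranging from -1 (consolidating)
# to +1 (disruptive).
def disruption_index(focal: str, references: set[str],
                     later_papers: dict[str, set[str]]) -> float:
    """later_papers maps each later paper's id to the set of ids it cites."""
    n_i = n_j = n_k = 0
    for cited in later_papers.values():
        cites_focal = focal in cited
        cites_refs = bool(cited & references)
        if cites_focal and not cites_refs:
            n_i += 1          # cites the focal paper only
        elif cites_focal and cites_refs:
            n_j += 1          # cites the focal paper and its references
        elif cites_refs:
            n_k += 1          # cites only the focal paper's references
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0

later = {"p1": {"focal"}, "p2": {"focal", "r1"}, "p3": {"r2"}}
print(disruption_index("focal", {"r1", "r2"}, later))  # (1-1)/(1+1+1) = 0.0
```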

Various ways of measuring the «novelty» of studies – novelty metrics[115] – have been proposed to balance a potential anti-novelty bias, such as textual analysis[115] or measuring whether a paper makes first-time-ever combinations of referenced journals, taking into account the difficulty of such combinations.[117] Other approaches include pro-actively funding risky projects.[58] (see above)
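The journal-combination idea can be sketched in a few lines. This toy version only flags first-time pairs of co-cited journals and omits the difficulty weighting mentioned above.

```python
# A sketch of a journal-combination novelty check: which pairs of journals
# cited together by this paper have never been co-cited before?
from itertools import combinations

def first_time_pairs(cited_journals: list[str],
                     seen_pairs: set[frozenset[str]]) -> set[frozenset[str]]:
    """Return pairs of cited journals not co-cited by any earlier paper."""
    pairs = {frozenset(p) for p in combinations(sorted(set(cited_journals)), 2)}
    return pairs - seen_pairs

seen = {frozenset({"Nature", "Science"})}  # pairs observed in earlier papers
print(first_time_pairs(["Nature", "Science", "Mind"], seen))
# -> {Nature, Mind} and {Science, Mind} are first-time combinations
```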

Topic mapping[edit]

Science maps can show the main interrelated topics within a certain scientific domain, their change over time, and their key actors (researchers, institutions, journals). They may help identify factors that determine the emergence of new scientific fields and the development of interdisciplinary areas, and could be relevant for science policy purposes.[118] (see above) Theories of scientific change could guide «the exploration and interpretation of visualized intellectual structures and dynamic patterns».[119] The maps can show the intellectual, social or conceptual structure of a research field.[120] Beyond visual maps, expert-survey-based studies and similar approaches could identify understudied or neglected societally important areas, topic-level problems (such as stigma or dogma), or potential misprioritizations.[additional citation(s) needed] Examples are studies about policy in relation to public health[121] and about the social science of climate change mitigation, where it has been estimated that only 0.12% of all funding for climate-related research is spent on this social science, even though the most urgent puzzle at the current juncture is working out how to mitigate climate change, whereas the natural science of climate change is already well established.
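A minimal science-mapping sketch in this spirit builds a small keyword co-occurrence network and clusters it into candidate topics. The terms and weights below are toy data; real maps are derived from large citation and term databases.

```python
# A minimal science-mapping sketch over a toy keyword co-occurrence network.
import networkx as nx
from networkx.algorithms import community

cooccurrences = [  # (term A, term B, number of papers mentioning both)
    ("mitigation", "policy", 12), ("mitigation", "emissions", 30),
    ("policy", "emissions", 8), ("galaxies", "dark matter", 25),
    ("galaxies", "cosmology", 18),
]
G = nx.Graph()
for a, b, weight in cooccurrences:
    G.add_edge(a, b, weight=weight)

# Community detection groups terms into candidate topic clusters.
for cluster in community.greedy_modularity_communities(G, weight="weight"):
    print(sorted(cluster))
```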

There are also studies that map a scientific field or topic, such as the study of the use of research evidence in policy and practice, partly using surveys.[123]

Controversies, current debates and disagreement[edit]

See also: § scite.ai and § Topic mapping

Percent of all citances in each field that contain signals of disagreement[124]

Some research investigates scientific controversies, and may identify currently ongoing major debates (e.g. open questions) and disagreements between scientists or studies.[additional citation(s) needed] One study suggests the level of disagreement was highest in the social sciences and humanities (0.61%), followed by biomedical and health sciences (0.41%), life and earth sciences (0.29%), physical sciences and engineering (0.15%), and mathematics and computer science (0.06%).[124] Such research may also show where the disagreements are, especially if they cluster, including visually, such as with cluster diagrams.
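Such measurements rest on detecting disagreement signals in citances (citing sentences). A rough sketch follows, with a hypothetical cue list rather than the cited study’s validated signal phrases and filters.

```python
# A rough sketch of signal-phrase detection in citances (citing sentences);
# the cue list below is hypothetical and illustrative only.
import re

DISAGREEMENT_CUES = [r"\bin contrast to\b", r"\bcontrary to\b",
                     r"\bchallenges? the\b", r"\binconsistent with\b"]
pattern = re.compile("|".join(DISAGREEMENT_CUES), re.IGNORECASE)

def disagreement_rate(citances: list[str]) -> float:
    """Share of citing sentences that contain a disagreement cue."""
    hits = sum(1 for sentence in citances if pattern.search(sentence))
    return hits / len(citances) if citances else 0.0

sample = ["In contrast to Smith (2019), we find no effect.",
          "We build on the dataset of Lee (2020)."]
print(disagreement_rate(sample))  # 0.5
```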

Challenges of interpretation of pooled results[edit]

Studies about a specific research question or research topic are often reviewed in the form of higher-level overviews in which results from various studies are integrated, compared, critically analyzed and interpreted. Examples of such works are scientific reviews and meta-analyses. These and related practices face various challenges and are a subject of metascience.

A meta-analysis of several small studies does not always predict the results of a single large study.[125] Some have argued that a weakness of the method is that sources of bias are not controlled by the method: a good meta-analysis cannot correct for poor design or bias in the original studies.[126] This would mean that only methodologically sound studies should be included in a meta-analysis, a practice called ‘best evidence synthesis’.[126] Other meta-analysts would include weaker studies, and add a study-level predictor variable that reflects the methodological quality of the studies to examine the effect of study quality on the effect size.[127] However, others have argued that a better approach is to preserve information about the variance in the study sample, casting as wide a net as possible, and that methodological selection criteria introduce unwanted subjectivity, defeating the purpose of the approach.[128]

Various issues with the included or available studies, for example heterogeneity of the methods used, may lead to faulty conclusions of a meta-analysis.[129]
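For illustration, the core of inverse-variance (fixed-effect) pooling, together with a basic heterogeneity check (Cochran's Q and I²), can be sketched in a few lines. This is illustrative only and not a substitute for a full meta-analysis package.

```python
# A compact sketch of fixed-effect meta-analysis with inverse-variance
# weights and a basic heterogeneity check (Cochran's Q and I^2).
import math

def fixed_effect_meta(effects: list[float], variances: list[float]):
    weights = [1.0 / v for v in variances]        # inverse-variance weights
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))            # standard error of pooled effect
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, effects))
    df = len(effects) - 1
    # I^2: share of total variation attributable to between-study heterogeneity
    i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, se, q, i_squared

# Three hypothetical studies: effect estimates and their variances.
print(fixed_effect_meta([0.30, 0.10, 0.25], [0.01, 0.02, 0.015]))
```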

Knowledge integration and living documents[edit]

Various problems require the swift integration of new and existing science-based knowledge. Settings where there are a large number of loosely related projects and initiatives especially benefit from a common ground or «commons».[108]

Evidence synthesis can be applied to important global challenges that are, notably, both relatively urgent and certain: «climate change, energy transitions, biodiversity loss, antimicrobial resistance, poverty eradication and so on». It has been suggested that a better system would keep summaries of research evidence up to date via living systematic reviews – e.g. as living documents. While the number of scientific papers and data (or information and online knowledge) has risen substantially,[additional citation(s) needed] the number of published academic systematic reviews has risen from «around 6,000 in 2011 to more than 45,000 in 2021».[130] An evidence-based approach is important for progress in science, policy, medical and other practices. For example, meta-analyses can quantify what is known and identify what is not yet known,[82] and can place «truly innovative and highly interdisciplinary ideas» into the context of established knowledge, which may enhance their impact.[58] (see above)

Factors of success and progress[edit]

See also: § Growth or stagnation of science overall

It has been hypothesized that a deeper understanding of factors behind successful science could «enhance prospects of science as a whole to more effectively address societal problems».[58]

Novel ideas and disruptive scholarship

Two metascientists reported that «structures fostering disruptive scholarship and focusing attention on novel ideas» could be important because, in a growing scientific field, citation flows disproportionately consolidate onto already well-cited papers, possibly slowing and inhibiting canonical progress.[131][132] A study concluded that, to enhance the impact of truly innovative and highly interdisciplinary novel ideas, they should be placed in the context of established knowledge.[58]

Mentorship, partnerships and social factors

Other researchers reported that the most successful protégés – in terms of «likelihood of prizewinning, National Academy of Science (NAS) induction, or superstardom» – studied under mentors who published research for which the mentors were conferred a prize after the protégés’ mentorship. Studying original topics rather than these mentors’ research topics was also positively associated with success.[133][134] Highly productive partnerships are also a topic of research – e.g. «super-ties» of frequent co-authorship between two individuals who can complement each other’s skills, likely also the result of other factors such as mutual trust, conviction, commitment and fun.[135][58]

Study of successful scientists and processes, general skills and activities

The emergence or origin of ideas by successful scientists is also a topic of research, for example by reviewing existing ideas on how Mendel made his discoveries[136] – or, more generally, the process of discovery by scientists. Science is a «multifaceted process of appropriation, copying, extending, or combining ideas and inventions» [and other types of knowledge or information], and not an isolated process.[58] There are also a few studies investigating scientists’ habits, common modes of thinking, reading habits, use of information sources, digital literacy skills, and workflows.[137][138][139][140][141]

Labor advantage

A study theorized that, in many disciplines, the larger scientific productivity or success of elite universities can be explained by their larger pool of available funded laborers.[142][143][further explanation needed]

Ultimate impacts

Success in science is often measured in terms of metrics like citations, not in terms of the eventual or potential impact on lives and society, which awards (see above) sometimes do recognize.[additional citation(s) needed] Problems with such metrics are roughly outlined elsewhere in this article and include that reviews replace citations to primary studies.[82] There are also proposals for changes to the academic incentive systems that would increase the recognition of societal impact in the research process.[144]

Progress studies

A proposed field of «progress studies» could investigate how scientists (or funders or evaluators of scientists) should act, «figuring out interventions», and study progress itself.[145] The field was explicitly proposed in a 2019 essay and described as an applied science that prescribes action.[146]

As and for acceleration of progress

A study suggests that improving the way science is done could accelerate the rate of scientific discovery and its applications, which could be useful for finding urgent solutions to humanity’s problems, improving humanity’s conditions, and enhancing the understanding of nature. Metascientific studies can seek to identify aspects of science that need improvement and to develop ways to improve them.[84] If science is accepted as the fundamental engine of economic growth and social progress, this raises «the question of what we – as a society – can do to accelerate science, and to direct science toward solving society’s most important problems.»[147] However, one of the authors clarified that a one-size-fits-all approach is not thought to be the right answer – for example, in funding, DARPA models, curiosity-driven methods, allowing «a single reviewer to champion a project even if his or her peers do not agree», and various other approaches all have their uses. Nevertheless, evaluating them can help build knowledge of what works or what works best.[114]

Reforms[edit]

Meta-research identifying flaws in scientific practice has inspired reforms in science. These reforms seek to address and fix problems in scientific practice which lead to low-quality or inefficient research.

A 2015 study lists «fragmented» efforts in meta-research.[1]

Pre-registration[edit]

The practice of registering a scientific study before it is conducted is called pre-registration. It arose as a means to address the replication crisis. Pre-registration requires the submission of a registered report, which is then accepted for publication or rejected by a journal based on theoretical justification, experimental design, and the proposed statistical analysis. Pre-registration of studies serves to prevent publication bias (e.g. not publishing negative results), reduce data dredging, and increase replicability.[148][149]

Reporting standards[edit]

Studies showing poor consistency and quality of reporting have demonstrated the need for reporting standards and guidelines in science, which has led to the rise of organisations that produce such standards, such as CONSORT (Consolidated Standards of Reporting Trials) and the EQUATOR Network.

The EQUATOR (Enhancing the QUAlity and Transparency Of health Research)[150] Network is an international initiative aimed at promoting transparent and accurate reporting of health research studies to enhance the value and reliability of medical research literature.[151] The EQUATOR Network was established with the goals of raising awareness of the importance of good reporting of research, assisting in the development, dissemination and implementation of reporting guidelines for different types of study designs, monitoring the status of the quality of reporting of research studies in the health sciences literature, and conducting research relating to issues that impact the quality of reporting of health research studies.[152] The Network acts as an «umbrella» organisation, bringing together developers of reporting guidelines, medical journal editors and peer reviewers, research funding bodies, and other key stakeholders with a mutual interest in improving the quality of research publications and research itself.

Applications[edit]

The areas of application of metascience include ICTs, medicine, psychology and physics.

ICTs[edit]

Metascience is used in the creation and improvement of technical systems (ICTs) and standards of science evaluation, incentivization, communication, commissioning, funding, regulation, production, management, use and publication. This has been called «applied metascience»[153][better source needed] and may seek to explore ways to increase the quantity, quality and positive impact of research. One example is the development of alternative metrics.[58]

Study screening and feedback

Various websites and tools identify inappropriate studies and/or enable feedback, such as PubPeer, Cochrane’s Risk of Bias Tool[154] and Retraction Watch. Medical and academic disputes are as ancient as antiquity, and a study calls for research into «constructive and obsessive criticism» and into policies to «help strengthen social media into a vibrant forum for discussion, and not merely an arena for gladiator matches».[155] Feedback on studies can be found via altmetrics, which are often integrated on the study’s website – most often as an embedded altmetrics badge – but may often be incomplete, for example showing only social-media discussions that link to the study directly but not those that link to news reports about the study. (see above)

Tools used, modified, extended or investigated

Tools may be developed through metaresearch, or may be used or investigated by it. Notable examples include:

  • The tool scite.ai aims to track and link citations of papers as ‘Supporting’, ‘Mentioning’ or ‘Contrasting’ the study.[156][157][158]
  • The Scite Reference Check bot is an extension of scite.ai that scans new article PDFs «for references to retracted papers, and posts both the citing and retracted papers on Twitter» and also «flags when new studies cite older ones that have issued corrections, errata, withdrawals, or expressions of concern».[158] Studies have suggested as few as 4% of citations to retracted papers clearly recognize the retraction.[158] (A sketch of such a retraction screen follows this list.)
  • Search engines like Google Scholar are used to find studies and the notification service Google Alerts enables notifications for new studies matching specified search terms. Scholarly communication infrastructure includes search databases.[159]
  • The shadow library Sci-Hub is a topic of metascience[160]
  • Personal knowledge management systems for research, knowledge and task management, such as saving information in organized ways[161] with multi-document text editors for future use.[162][163] Such systems – along with e.g. web browsers (tab add-ons[164] etc.) and search software[additional citation(s) needed] – could be described as «mind-machine partnerships» that could be investigated by metascience for how they could improve science.[58]
  • Scholia – efforts to open scholarly publication metadata and use it via Wikidata.[165] (see above)
  • Various software enables common metascientific practices such as bibliometric analysis.[166]
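As referenced in the list above, a retraction screen over a manuscript’s reference list can be sketched as follows. The file name and column are hypothetical stand-ins for a retracted-DOI dataset, such as an export of the Retraction Watch database; this is not scite.ai’s actual implementation.

```python
# A minimal sketch of screening a manuscript's cited DOIs against a local,
# hypothetical dataset of retracted DOIs ("retracted_dois.csv" with a "doi"
# column is an assumed stand-in, e.g. for a Retraction Watch export).
import csv

def load_retracted_dois(path: str) -> set[str]:
    with open(path, newline="", encoding="utf-8") as f:
        return {row["doi"].strip().lower() for row in csv.DictReader(f)}

def flag_retracted(cited_dois: list[str], retracted: set[str]) -> list[str]:
    """Return the cited DOIs that appear in the retracted-DOI set."""
    return [d for d in cited_dois if d.strip().lower() in retracted]

retracted = load_retracted_dois("retracted_dois.csv")  # hypothetical file
print(flag_retracted(["10.1000/example.123"], retracted))
```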
Development

According to one study, «a simple way to check how often studies have been repeated, and whether or not the original findings are confirmed» is needed due to reproducibility issues in science.[167][168] Another study suggests a tool for screening studies for early warning signs of research fraud.[169]

Medicine[edit]

Clinical research in medicine is often of low quality, and many studies cannot be replicated.[170][171] An estimated 85% of research funding is wasted.[172] Additionally, the presence of bias affects research quality.[173] The pharmaceutical industry exerts substantial influence on the design and execution of medical research. Conflicts of interest are common among authors of medical literature[174] and among editors of medical journals. While almost all medical journals require their authors to disclose conflicts of interest, editors are not required to do so.[175] Financial conflicts of interest have been linked to higher rates of positive study results. In antidepressant trials, pharmaceutical sponsorship is the best predictor of trial outcome.[176]

Blinding is another focus of meta-research, as error caused by poor blinding is a source of experimental bias. Blinding is not well reported in medical literature, and widespread misunderstanding of the subject has resulted in poor implementation of blinding in clinical trials.[177] Furthermore, failure of blinding is rarely measured or reported.[178] Research showing the failure of blinding in antidepressant trials has led some scientists to argue that antidepressants are no better than placebo.[179][180] In light of meta-research showing failures of blinding, CONSORT standards recommend that all clinical trials assess and report the quality of blinding.[181]

Studies have shown that systematic reviews of existing research evidence are sub-optimally used in planning new research or in summarizing results.[182] Cumulative meta-analyses of studies evaluating the effectiveness of medical interventions have shown that many clinical trials could have been avoided if a systematic review of existing evidence had been done prior to conducting a new trial.[183][184][185] For example, Lau et al.[183] analyzed 33 clinical trials (involving 36,974 patients) evaluating the effectiveness of intravenous streptokinase for acute myocardial infarction. Their cumulative meta-analysis demonstrated that 25 of the 33 trials could have been avoided if a systematic review had been conducted prior to each new trial. In other words, randomizing 34,542 patients was potentially unnecessary. One study[186] analyzed 1,523 clinical trials included in 227 meta-analyses and concluded that «less than one quarter of relevant prior studies» were cited. It also confirmed earlier findings that most clinical trial reports do not present a systematic review to justify the research or summarize the results.[186]

Many treatments used in modern medicine have been proven to be ineffective, or even harmful. A 2007 study by John Ioannidis found that it took an average of ten years for the medical community to stop referencing popular practices after their efficacy was unequivocally disproven.[187][188]

Psychology[edit]

Metascience has revealed significant problems in psychological research. The field suffers from high bias, low reproducibility, and widespread misuse of statistics.[189][190][191] The replication crisis affects psychology more strongly than any other field; as many as two-thirds of highly publicized findings may be impossible to replicate.[192] Meta-research finds that 80–95% of psychological studies support their initial hypotheses, which strongly implies the existence of publication bias.[193]
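The arithmetic behind that inference can be made explicit. Under assumed illustrative values – a share π = 0.5 of tested hypotheses being true, statistical power 1 − β = 0.8, and significance level α = 0.05 – the expected share of studies supporting their hypotheses would be

```latex
P(\text{support}) = \pi(1-\beta) + (1-\pi)\,\alpha
                  = 0.5 \times 0.8 + 0.5 \times 0.05 = 0.425
```

well below the observed 80–95%, which is what points to selective reporting and publication.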

The replication crisis has led to renewed efforts to re-test important findings.[194][195] In response to concerns about publication bias and p-hacking, more than 140 psychology journals have adopted result-blind peer review, in which studies are pre-registered and published without regard for their outcome.[196] An analysis of these reforms estimated that 61 percent of result-blind studies produce null results, in contrast with 5 to 20 percent in earlier research. This analysis shows that result-blind peer review substantially reduces publication bias.[193]

Psychologists routinely confuse statistical significance with practical importance, enthusiastically reporting great certainty in unimportant facts.[197] Some psychologists have responded with an increased use of effect size statistics, rather than sole reliance on p values.[citation needed]

Physics[edit]

Richard Feynman noted that estimates of physical constants were closer to published values than would be expected by chance. This was believed to be the result of confirmation bias: results that agreed with existing literature were more likely to be believed, and therefore published. Physicists now implement blinding to prevent this kind of bias.[198]

Organizations and institutes[edit]

There are several organizations and universities across the globe that work on meta-research – these include the Meta-Research Innovation Center in Berlin,[199] the Meta-Research Innovation Center at Stanford,[200][201] the Meta-Research Center at Tilburg University, the Meta-research & Evidence Synthesis Unit of The George Institute for Global Health in India, and the Center for Open Science. Organizations that develop tools for metascience include Our Research, the Center for Scientific Integrity, and altmetrics companies. There is an annual Metascience Conference.[202]

See also[edit]

  • Accelerating change
  • Citation analysis
  • Epistemology
  • Evidence-based practices
  • Evidence-based medicine
  • Evidence-based policy
  • Further research is needed
  • HARKing
  • Logology (science)
  • Metadata#In science
  • Metatheory
  • Open science
  • Philosophy of science
  • Sociology of scientific knowledge
  • Self-Organized Funding Allocation

References[edit]

  1. ^ a b c d e f g h Ioannidis, John P. A.; Fanelli, Daniele; Dunne, Debbie Drake; Goodman, Steven N. (2 October 2015). «Meta-research: Evaluation and Improvement of Research Methods and Practices». PLOS Biology. 13 (10): e1002264. doi:10.1371/journal.pbio.1002264. ISSN 1544-9173. PMC 4592065. PMID 26431313.
  2. ^ Bach, Becky (8 December 2015). «On communicating science and uncertainty: A podcast with John Ioannidis». Scope. Retrieved 20 May 2019.
  3. ^ Pashler, Harold; Harris, Christine R. (2012). «Is the Replicability Crisis Overblown? Three Arguments Examined». Perspectives on Psychological Science. 7 (6): 531–536. doi:10.1177/1745691612463401. ISSN 1745-6916. PMID 26168109. S2CID 1342421.
  4. ^ Nishikawa-Pacher, Andreas; Heck, Tamara; Schoch, Kerstin (4 October 2022). «Open Editors: A dataset of scholarly journals’ editorial board positions». Research Evaluation. doi:10.1093/reseval/rvac037. eISSN 1471-5449. ISSN 0958-2029.
  5. ^ a b Ioannidis, JP (August 2005). «Why most published research findings are false». PLOS Medicine. 2 (8): e124. doi:10.1371/journal.pmed.0020124. PMC 1182327. PMID 16060722.
  6. ^ Schor, Stanley (1966). «Statistical Evaluation of Medical Journal Manuscripts». JAMA: The Journal of the American Medical Association. 195 (13): 1123–1128. doi:10.1001/jama.1966.03100130097026. ISSN 0098-7484. PMID 5952081.
  7. ^ «Highly Cited Researchers». Retrieved September 17, 2015.
  8. ^ Medicine — Stanford Prevention Research Center. John P.A. Ioannidis
  9. ^ Robert Lee Hotz (September 14, 2007). «Most Science Studies Appear to Be Tainted By Sloppy Analysis». Wall Street Journal. Dow Jones & Company. Retrieved 2016-12-05.
  10. ^ Howick J, Koletsi D, Pandis N, Fleming PS, Loef M, Walach H, Schmidt S, Ioannidis JA. The quality of evidence for medical interventions does not improve or worsen: a metaepidemiological study of Cochrane reviews. Journal of Clinical Epidemiology 2020;126:154-159 [1]
  11. ^ «Researching the researchers». Nature Genetics. 46 (5): 417. 2014. doi:10.1038/ng.2972. ISSN 1061-4036. PMID 24769715.
  12. ^ Enserink, Martin (2018). «Research on research». Science. 361 (6408): 1178–1179. Bibcode:2018Sci…361.1178E. doi:10.1126/science.361.6408.1178. ISSN 0036-8075. PMID 30237336. S2CID 206626417.
  13. ^ Rennie, Drummond (1990). «Editorial Peer Review in Biomedical Publication». JAMA. 263 (10): 1317–1441. doi:10.1001/jama.1990.03440100011001. ISSN 0098-7484. PMID 2304208.
  14. ^ Harriman, Stephanie L.; Kowalczuk, Maria K.; Simera, Iveta; Wager, Elizabeth (2016). «A new forum for research on research integrity and peer review». Research Integrity and Peer Review. 1 (1): 5. doi:10.1186/s41073-016-0010-y. ISSN 2058-8615. PMC 5794038. PMID 29451544.
  15. ^ Fanelli, Daniele; Costas, Rodrigo; Ioannidis, John P. A. (2017). «Meta-assessment of bias in science». Proceedings of the National Academy of Sciences of the United States of America. 114 (14): 3714–3719. Bibcode:2017PNAS..114.3714F. doi:10.1073/pnas.1618569114. ISSN 1091-6490. PMC 5389310. PMID 28320937.
  16. ^ Check Hayden, Erika (2013). «Weak statistical standards implicated in scientific irreproducibility». Nature. doi:10.1038/nature.2013.14131. S2CID 211729036. Retrieved 9 May 2019.
  17. ^ Markowitz, David M.; Hancock, Jeffrey T. (2016). «Linguistic obfuscation in fraudulent science». Journal of Language and Social Psychology. 35 (4): 435–445. doi:10.1177/0261927X15614605. S2CID 146174471.
  18. ^ Ding, Y. (2010). «Applying weighted PageRank to author citation networks». Journal of the American Society for Information Science and Technology. 62 (2): 236–245. arXiv:1102.1760. doi:10.1002/asi.21452. S2CID 3752804.
  19. ^ Galipeau, James; Moher, David; Campbell, Craig; Hendry, Paul; Cameron, D. William; Palepu, Anita; Hébert, Paul C. (March 2015). «A systematic review highlights a knowledge gap regarding the effectiveness of health-related training programs in journalology». Journal of Clinical Epidemiology. 68 (3): 257–265. doi:10.1016/j.jclinepi.2014.09.024. PMID 25510373.
  20. ^ Wilson, Mitch; Moher, David (March 2019). «The Changing Landscape of Journalology in Medicine». Seminars in Nuclear Medicine. 49 (2): 105–114. doi:10.1053/j.semnuclmed.2018.11.009. hdl:10393/38493. PMID 30819390. S2CID 73471103.
  21. ^ a b c Couzin-Frankel, Jennifer (18 September 2018). «‘Journalologists’ use scientific methods to study academic publishing. Is their work improving science?». Science. doi:10.1126/science.aav4758. S2CID 115360831.
  22. ^ Schooler, J. W. (2014). «Metascience could rescue the ‘replication crisis’«. Nature. 515 (7525): 9. Bibcode:2014Natur.515….9S. doi:10.1038/515009a. PMID 25373639.
  23. ^ Smith, Noah (2 November 2017). «Why ‘Statistical Significance’ Is Often Insignificant». Bloomberg.com. Retrieved 7 November 2017.
  24. ^ Pashler, Harold; Wagenmakers, Eric Jan (2012). «Editors’ Introduction to the Special Section on Replicability in Psychological Science: A Crisis of Confidence?». Perspectives on Psychological Science. 7 (6): 528–530. doi:10.1177/1745691612465253. PMID 26168108. S2CID 26361121.
  25. ^ Gary Marcus (May 1, 2013). «The Crisis in Social Psychology That Isn’t». The New Yorker.
  26. ^ Jonah Lehrer (December 13, 2010). «The Truth Wears Off». The New Yorker.
  27. ^ «Dozens of major cancer studies can’t be replicated». Science News. 7 December 2021. Retrieved 19 January 2022.
  28. ^ «Reproducibility Project: Cancer Biology». www.cos.io. Center for Open Science. Retrieved 19 January 2022.
  29. ^ Staddon, John (2017) Scientific Method: How science works, fails to work or pretends to work. Taylor and Francis.
  30. ^ Yeung, Andy W. K. (2017). «Do Neuroscience Journals Accept Replications? A Survey of Literature». Frontiers in Human Neuroscience. 11: 468. doi:10.3389/fnhum.2017.00468. ISSN 1662-5161. PMC 5611708. PMID 28979201.
  31. ^ Martin, G. N.; Clarke, Richard M. (2017). «Are Psychology Journals Anti-replication? A Snapshot of Editorial Practices». Frontiers in Psychology. 8: 523. doi:10.3389/fpsyg.2017.00523. ISSN 1664-1078. PMC 5387793. PMID 28443044.
  32. ^ Binswanger, Mathias (2015). «How Nonsense Became Excellence: Forcing Professors to Publish». In Welpe, Isabell M.; Wollersheim, Jutta; Ringelhan, Stefanie; Osterloh, Margit (eds.). Incentives and Performance. Incentives and Performance: Governance of Research Organizations. Springer International Publishing. pp. 19–32. doi:10.1007/978-3-319-09785-5_2. ISBN 978-3319097855. S2CID 110698382.
  33. ^ Edwards, Marc A.; Roy, Siddhartha (2016-09-22). «Academic Research in the 21st Century: Maintaining Scientific Integrity in a Climate of Perverse Incentives and Hypercompetition». Environmental Engineering Science. 34 (1): 51–61. doi:10.1089/ees.2016.0223. PMC 5206685. PMID 28115824.
  34. ^ Brookshire, Bethany (21 October 2016). «Blame bad incentives for bad science». Science News. Retrieved 11 July 2019.
  35. ^ Smaldino, Paul E.; McElreath, Richard (2016). «The natural selection of bad science». Royal Society Open Science. 3 (9): 160384. arXiv:1605.09511. Bibcode:2016RSOS….360384S. doi:10.1098/rsos.160384. PMC 5043322. PMID 27703703.
  36. ^ a b Chapman, Colin A.; Bicca-Marques, Júlio César; Calvignac-Spencer, Sébastien; Fan, Pengfei; Fashing, Peter J.; Gogarten, Jan; Guo, Songtao; Hemingway, Claire A.; Leendertz, Fabian; Li, Baoguo; Matsuda, Ikki; Hou, Rong; Serio-Silva, Juan Carlos; Chr. Stenseth, Nils (4 December 2019). «Games academics play and their consequences: how authorship, h-index and journal impact factors are shaping the future of academia». Proceedings of the Royal Society B: Biological Sciences. 286 (1916): 20192047. doi:10.1098/rspb.2019.2047. ISSN 0962-8452.
  37. ^ Holcombe, Alex O. (September 2019). «Contributorship, Not Authorship: Use CRediT to Indicate Who Did What». Publications. 7 (3): 48. doi:10.3390/publications7030048.
  38. ^ McNutt, Marcia K.; Bradford, Monica; Drazen, Jeffrey M.; Hanson, Brooks; Howard, Bob; Jamieson, Kathleen Hall; Kiermer, Véronique; Marcus, Emilie; Pope, Barbara Kline; Schekman, Randy; Swaminathan, Sowmya; Stang, Peter J.; Verma, Inder M. (13 March 2018). «Transparency in authors’ contributions and responsibilities to promote integrity in scientific publication». Proceedings of the National Academy of Sciences. 115 (11): 2557–2560. Bibcode:2018PNAS..115.2557M. doi:10.1073/pnas.1715374115. ISSN 0027-8424. PMC 5856527. PMID 29487213.
  39. ^ Brand, Amy; Allen, Liz; Altman, Micah; Hlava, Marjorie; Scott, Jo (1 April 2015). «Beyond authorship: attribution, contribution, collaboration, and credit». Learned Publishing. 28 (2): 151–155. doi:10.1087/20150211. S2CID 45167271.
  40. ^ Singh Chawla, Dalmeet (October 2015). «Digital badges aim to clear up politics of authorship». Nature. 526 (7571): 145–146. Bibcode:2015Natur.526..145S. doi:10.1038/526145a. ISSN 1476-4687. PMID 26432249. S2CID 256770827.
  41. ^ a b c Fire, Michael; Guestrin, Carlos (1 June 2019). «Over-optimization of academic publishing metrics: observing Goodhart’s Law in action». GigaScience. 8 (6): giz053. doi:10.1093/gigascience/giz053. PMC 6541803. PMID 31144712.
  42. ^ a b Elson, Malte; Huff, Markus; Utz, Sonja (1 March 2020). «Metascience on Peer Review: Testing the Effects of a Study’s Originality and Statistical Significance in a Field Experiment». Advances in Methods and Practices in Psychological Science. 3 (1): 53–65. doi:10.1177/2515245919895419. ISSN 2515-2459. S2CID 212778011.
  43. ^ McLean, Robert K D; Sen, Kunal (1 April 2019). «Making a difference in the real world? A meta-analysis of the quality of use-oriented research using the Research Quality Plus approach». Research Evaluation. 28 (2): 123–135. doi:10.1093/reseval/rvy026.
  44. ^ «Bringing Rigor to Relevant Questions: How Social Science Research Can Improve Youth Outcomes in the Real World» (PDF). Retrieved 22 November 2021.
  45. ^ Fecher, Benedikt; Friesike, Sascha; Hebing, Marcel; Linek, Stephanie (20 June 2017). «A reputation economy: how individual reward considerations trump systemic arguments for open access to data». Palgrave Communications. 3 (1): 1–10. doi:10.1057/palcomms.2017.51. ISSN 2055-1045.
  46. ^ La Porta, Caterina AM; Zapperi, Stefano (1 December 2022). «America’s top universities reap the benefit of Italian-trained scientists». Nature Italy. doi:10.1038/d43978-022-00163-5. S2CID 254331807. Retrieved 18 December 2022.
  47. ^ Leydesdorff, L. and Milojevic, S., «Scientometrics» arXiv:1208.4566 (2013), forthcoming in: Lynch, M. (editor), International Encyclopedia of Social and Behavioral Sciences subsection 85030. (2015)
  48. ^ a b Singh, Navinder (8 October 2021). «Plea to publish less». arXiv:2201.07985 [physics.soc-ph].
  49. ^ Manchanda, Saurav; Karypis, George (November 2021). «Evaluating Scholarly Impact: Towards Content-Aware Bibliometrics». Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics: 6041–6053. doi:10.18653/v1/2021.emnlp-main.488. S2CID 243865632.
  50. ^ Manchanda, Saurav; Karypis, George. «Importance Assessment in Scholarly Networks» (PDF).
  51. ^ a b Nielsen, Kristian H. (1 March 2021). «Science and public policy». Metascience. 30 (1): 79–81. doi:10.1007/s11016-020-00581-5. ISSN 1467-9981. PMC 7605730. S2CID 226237994.
  52. ^ Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. pp. 229–237. ISBN 978-0199678112.
  53. ^ Ord, Toby (2020). The Precipice: Existential Risk and the Future of Humanity. United Kingdom: Bloomsbury Publishing. p. 200. ISBN 978-1526600219.
  54. ^ «Technology is changing faster than regulators can keep up — here’s how to close the gap». World Economic Forum. Retrieved 27 January 2022.
  55. ^ Overland, Indra; Sovacool, Benjamin K. (1 April 2020). «The misallocation of climate research funding». Energy Research & Social Science. 62: 101349. doi:10.1016/j.erss.2019.101349. ISSN 2214-6296. S2CID 212789228.
  56. ^ «Nobel prize-winning work is concentrated in minority of scientific fields». phys.org. Retrieved 17 August 2020.
  57. ^ Ioannidis, John P. A.; Cristea, Ioana-Alina; Boyack, Kevin W. (29 July 2020). «Work honored by Nobel prizes clusters heavily in a few scientific fields». PLOS ONE. 15 (7): e0234612. Bibcode:2020PLoSO..1534612I. doi:10.1371/journal.pone.0234612. ISSN 1932-6203. PMC 7390258. PMID 32726312.
  58. ^ a b c d e f g h i j k l Fortunato, Santo; Bergstrom, Carl T.; Börner, Katy; Evans, James A.; Helbing, Dirk; Milojević, Staša; Petersen, Alexander M.; Radicchi, Filippo; Sinatra, Roberta; Uzzi, Brian; Vespignani, Alessandro; Waltman, Ludo; Wang, Dashun; Barabási, Albert-László (2 March 2018). «Science of science». Science. 359 (6379): eaao0185. doi:10.1126/science.aao0185. PMC 5949209. PMID 29496846. Retrieved 22 November 2021.
  59. ^ Fajardo-Ortiz, David; Hornbostel, Stefan; Montenegro de Wit, Maywa; Shattuck, Annie (22 June 2022). «Funding CRISPR: Understanding the role of government and philanthropic institutions in supporting academic research within the CRISPR innovation system». Quantitative Science Studies. 3 (2): 443–456. doi:10.1162/qss_a_00187. S2CID 235266330.
  60. ^ «Research questions that could have a big social impact, organised by discipline». 80,000 Hours. Retrieved 31 August 2022.
  61. ^ a b Coley, Alan A (30 August 2017). «Open problems in mathematical physics». Physica Scripta. 92 (9): 093003. arXiv:1710.02105. Bibcode:2017PhyS…92i3003C. doi:10.1088/1402-4896/aa83c1. ISSN 0031-8949. S2CID 3892374.
  62. ^ Adolphs, Ralph (1 April 2015). «The unsolved problems of neuroscience». Trends in Cognitive Sciences. 19 (4): 173–175. doi:10.1016/j.tics.2015.01.007. ISSN 1364-6613. PMC 4574630. PMID 25703689. As for Hilbert’s problems, there is a Wikipedia entry for ‘unsolved problems in neuroscience’; there are more popular writings; and there are books. In trying to brainstorm a list of my own, I read the above sources and asked around. This yields a predictable list ranging from ‘how can we cure psychiatric illness?’ to ‘what is consciousness?’ (Box 1). Asking Caltech faculty added entries about how networks function and what neural computation is. Caltech students had things figured out and got straight to the point (‘how can I sleep less?’, ‘how can we save our species?’, ‘can we become immortal?’).
  63. ^ Dev, Sukhendu B. (1 March 2015). «Unsolved problems in biology—The state of current thinking». Progress in Biophysics and Molecular Biology. 117 (2): 232–239. doi:10.1016/j.pbiomolbio.2015.02.001. ISSN 0079-6107. PMID 25687284. Among many of the responses I received, a large majority mentioned several aspects of neuroscience. This is not surprising since the brain remains the most uncharted area in humans. A list of unsolved problems in neuroscience can be found in http://en.wikipedia.org/wiki/List_of_unsolved_problems_in_neuroscience (Accessed January 12, 2015).
  64. ^ Cartaxo, Bruno; Pinto, Gustavo; Ribeiro, Danilo; Kamei, Fernando; Santos, Ronnie E.S.; da Silva, Fábio Q.B.; Soares, Sérgio (May 2017). «Using Q&A Websites as a Method for Assessing Systematic Reviews». 2017 IEEE/ACM 14th International Conference on Mining Software Repositories (MSR): 238–242. doi:10.1109/MSR.2017.5. ISBN 978-1-5386-1544-7. S2CID 5853766.
  65. ^ Synnot, Anneliese; Bragge, Peter; Lowe, Dianne; Nunn, Jack S; O’Sullivan, Molly; Horvat, Lidia; Tong, Allison; Kay, Debra; Ghersi, Davina; McDonald, Steve; Poole, Naomi; Bourke, Noni; Lannin, Natasha; Vadasz, Danny; Oliver, Sandy; Carey, Karen; Hill, Sophie J (May 2018). «Research priorities in health communication and participation: international survey of consumers and other stakeholders». BMJ Open. 8 (5): e019481. doi:10.1136/bmjopen-2017-019481. PMC 5942413. PMID 29739780.
  66. ^ Synnot, Anneliese J.; Tong, Allison; Bragge, Peter; Lowe, Dianne; Nunn, Jack S.; O’Sullivan, Molly; Horvat, Lidia; Kay, Debra; Ghersi, Davina; McDonald, Steve; Poole, Naomi; Bourke, Noni; Lannin, Natasha A.; Vadasz, Danny; Oliver, Sandy; Carey, Karen; Hill, Sophie J. (29 April 2019). «Selecting, refining and identifying priority Cochrane Reviews in health communication and participation in partnership with consumers and other stakeholders». Health Research Policy and Systems. 17 (1): 45. doi:10.1186/s12961-019-0444-z. PMC 6489310. PMID 31036016.
  67. ^ Salerno, Reynolds M.; Gaudioso, Jennifer; Brodsky, Benjamin H. (2007). «Preface». Laboratory Biosecurity Handbook (Illustrated ed.). CRC Press. p. xi. ISBN 9781420006209. Retrieved 23 May 2020.
  68. ^ Piper, Kelsey (2022-04-05). «Why experts are terrified of a human-made pandemic — and what we can do to stop it». Vox. Retrieved 2022-04-08.
  69. ^ Ord, Toby (2020-03-06). «Why we need worst-case thinking to prevent pandemics». The Guardian. ISSN 0261-3077. Retrieved 2020-04-11. This is an edited extract from The Precipice: Existential Risk and the Future of Humanity
  70. ^ Ord, Toby (2021-03-23). «Covid-19 has shown humanity how close we are to the edge». The Guardian. ISSN 0261-3077. Retrieved 2021-03-26.
  71. ^ «Forschung an Krankheitserregern soll sicherer werden». www.sciencemediacenter.de. Retrieved 17 January 2023.
  72. ^ Pannu, Jaspreet; Palmer, Megan J.; Cicero, Anita; Relman, David A.; Lipsitch, Marc; Inglesby, Tom (16 December 2022). «Strengthen oversight of risky research on pathogens». Science. 378 (6625): 1170–1172. Bibcode:2022Sci…378.1170P. doi:10.1126/science.adf6020. ISSN 0036-8075. PMID 36480598. S2CID 254998228.
    • University press release: «Stanford Researchers Recommend Stronger Oversight of Risky Research on Pathogens». Stanford University. Retrieved 17 January 2023.

  73. ^ a b «Science as a Global Public Good». International Science Council. 8 October 2021. Retrieved 22 November 2021.
  74. ^ Jamieson, Kathleen Hall; Kahan, Dan; Scheufele, Dietram A. (17 May 2017). The Oxford Handbook of the Science of Science Communication. Oxford University Press. ISBN 978-0190497637.
  75. ^ Grochala, Rafał (16 December 2019). «Science communication in online media: influence of press releases on coverage of genetics and CRISPR». doi:10.1101/2019.12.13.875278. S2CID 213125031.
  76. ^ «Framing analysis of news coverage on renewable energy in The Star Online news portal» (PDF). Retrieved 22 November 2021.
  77. ^ MacLaughlin, Ansel; Wihbey, John; Smith, David (15 June 2018). «Predicting News Coverage of Scientific Articles». Proceedings of the International AAAI Conference on Web and Social Media. 12 (1). doi:10.1609/icwsm.v12i1.14999. ISSN 2334-0770. S2CID 49412893.
  78. ^ Carrigan, Mark; Jordan, Katy (4 November 2021). «Platforms and Institutions in the Post-Pandemic University: a Case Study of Social Media and the Impact Agenda». Postdigital Science and Education. 4 (2): 354–372. doi:10.1007/s42438-021-00269-x. ISSN 2524-4868. S2CID 243760357.
  79. ^ Baykoucheva, Svetla (2015). «Measuring attention». Managing Scientific Information and Research Data: 127–136. doi:10.1016/B978-0-08-100195-0.00014-7. ISBN 978-0081001950.
  80. ^ a b c Zagorova, Olga; Ulloa, Roberto; Weller, Katrin; Flöck, Fabian (12 April 2022). ««I updated the <ref>»: The evolution of references in the English Wikipedia and the implications for altmetrics». Quantitative Science Studies. 3 (1): 147–173. doi:10.1162/qss_a_00171. S2CID 222177064.
  81. ^ Williams, Ann E. (12 June 2017). «Altmetrics: an overview and evaluation». Online Information Review. 41 (3): 311–317. doi:10.1108/OIR-10-2016-0294.
  82. ^ a b c Gurevitch, Jessica; Koricheva, Julia; Nakagawa, Shinichi; Stewart, Gavin (March 2018). «Meta-analysis and the science of research synthesis». Nature. 555 (7695): 175–182. Bibcode:2018Natur.555..175G. doi:10.1038/nature25753. ISSN 1476-4687. PMID 29517004. S2CID 3761687.
  83. ^ Balbi, Stefano; Bagstad, Kenneth J.; Magrach, Ainhoa; Sanz, Maria Jose; Aguilar-Amuchastegui, Naikoa; Giupponi, Carlo; Villa, Ferdinando (17 February 2022). «The global environmental agenda urgently needs a semantic web of knowledge». Environmental Evidence. 11 (1): 5. doi:10.1186/s13750-022-00258-y. ISSN 2047-2382. S2CID 246872765.
  84. ^ a b Khalil, Mohammed M. (2016). «Improving Science for a Better Future». How Should Humanity Steer the Future?. The Frontiers Collection. Springer International Publishing: 113–126. doi:10.1007/978-3-319-20717-9_11. ISBN 978-3-319-20716-2.
  85. ^ «How Do Science Journalists Evaluate Psychology Research?». psyarxiv.com.
  86. ^ Dunlop, Lynda; Veneu, Fernanda (1 September 2019). «Controversies in Science». Science & Education. 28 (6): 689–710. doi:10.1007/s11191-019-00048-y. ISSN 1573-1901. S2CID 255016078.
  87. ^ Norsen, Travis (2016). «Back to the Future: Crowdsourcing Innovation by Refocusing Science Education». How Should Humanity Steer the Future?. The Frontiers Collection: 85–95. doi:10.1007/978-3-319-20717-9_9. ISBN 978-3-319-20716-2.
  88. ^ Bschir, Karim (July 2021). «How to make sense of science: Mano Singham: The great paradox of science: why its conclusions can be relied upon even though they cannot be proven. Oxford: Oxford University Press, 2019, 332 pp, £ 22.99 HB». Metascience. 30 (2): 327–330. doi:10.1007/s11016-021-00654-z. S2CID 254792908.
  89. ^ «Correcting misconceptions — Understanding Science». 21 April 2022. Retrieved 25 January 2023.
  90. ^ Philipp-Muller, Aviva; Lee, Spike W. S.; Petty, Richard E. (26 July 2022). «Why are people antiscience, and what can we do about it?». Proceedings of the National Academy of Sciences. 119 (30): e2120755119. Bibcode:2022PNAS..11920755P. doi:10.1073/pnas.2120755119. ISSN 0027-8424. PMC 9335320. PMID 35858405.
  91. ^ «The 4 bases of anti-science beliefs – and what to do about them». SCIENMAG: Latest Science and Health News. 11 July 2022. Retrieved 25 January 2023.
  92. ^ Hotez, Peter J. «The Antiscience Movement Is Escalating, Going Global and Killing Thousands». Scientific American. Retrieved 25 January 2023.
  93. ^ a b c d e f Park, Michael; Leahey, Erin; Funk, Russell J. (January 2023). «Papers and patents are becoming less disruptive over time». Nature. 613 (7942): 138–144. Bibcode:2023Natur.613..138P. doi:10.1038/s41586-022-05543-x. ISSN 1476-4687. PMID 36600070. S2CID 255466666.
  94. ^ Ginsparg, Paul (September 2021). «Lessons from arXiv’s 30 years of information sharing». Nature Reviews Physics. 3 (9): 602–603. doi:10.1038/s42254-021-00360-z. PMC 8335983. PMID 34377944.
  95. ^ «Nature Journals To Charge Authors Hefty Fee To Make Scientific Papers Open Access». IFLScience. Retrieved 22 November 2021.
  96. ^ «Harvard University says it can’t afford journal publishers’ prices». The Guardian. 24 April 2012. Retrieved 22 November 2021.
  97. ^ Van Noorden, Richard (1 March 2013). «Open access: The true cost of science publishing». Nature. 495 (7442): 426–429. Bibcode:2013Natur.495..426V. doi:10.1038/495426a. ISSN 1476-4687. PMID 23538808. S2CID 27021567.
  98. ^ Tennant, Jonathan P.; Waldner, François; Jacques, Damien C.; Masuzzo, Paola; Collister, Lauren B.; Hartgerink, Chris. H. J. (21 September 2016). «The academic, economic and societal impacts of Open Access: an evidence-based review». F1000Research. 5: 632. doi:10.12688/f1000research.8460.3. PMC 4837983. PMID 27158456.
  99. ^ «Paywall: The business of scholarship review – analysis of a scandal». New Scientist. Retrieved 28 January 2023.
  100. ^ Powell, Kendall (1 February 2016). «Does it take too long to publish research?». Nature. 530 (7589): 148–151. doi:10.1038/530148a. PMID 26863966. S2CID 1013588. Retrieved 28 January 2023.
  101. ^ «Open peer review: bringing transparency, accountability, and inclusivity to the peer review process». Impact of Social Sciences. 13 September 2017. Retrieved 28 January 2023.
  102. ^ Dattani, Saloni. «The Pandemic Uncovered Ways to Speed Up Science». Wired. Retrieved 28 January 2023.
  103. ^ «Speeding up the publication process at PLOS ONE». EveryONE. 13 May 2019. Retrieved 28 January 2023.
  104. ^ a b «Open Alex Data Evolution». observablehq.com. 8 February 2022. Retrieved 18 February 2022.
  105. ^ Singh Chawla, Dalmeet (24 January 2022). «Massive open index of scholarly papers launches». Nature. doi:10.1038/d41586-022-00138-y. Retrieved 14 February 2022.
  106. ^ «OpenAlex: The Promising Alternative to Microsoft Academic Graph». Singapore Management University (SMU). Retrieved 14 February 2022.
  107. ^ «OpenAlex Documentation». Retrieved 18 February 2022.
  108. ^ a b Waagmeester, Andra; Willighagen, Egon L.; Su, Andrew I.; Kutmon, Martina; Gayo, Jose Emilio Labra; Fernández-Álvarez, Daniel; Groom, Quentin; Schaap, Peter J.; Verhagen, Lisa M.; Koehorst, Jasper J. (22 January 2021). «A protocol for adding knowledge to Wikidata: aligning resources on human coronaviruses». BMC Biology. 19 (1): 12. doi:10.1186/s12915-020-00940-y. ISSN 1741-7007. PMC 7820539. PMID 33482803.
  109. ^ Jin, Ching; Ma, Yifang; Uzzi, Brian (5 October 2021). «Scientific prizes and the extraordinary growth of scientific topics». Nature Communications. 12 (1): 5619. arXiv:2012.09269. Bibcode:2021NatCo..12.5619J. doi:10.1038/s41467-021-25712-2. ISSN 2041-1723. PMC 8492701. PMID 34611161.
  110. ^ «Scholia – biomarker». Retrieved 28 January 2023.
  111. ^ Bornmann, Lutz; Haunschild, Robin; Mutz, Rüdiger (7 October 2021). «Growth rates of modern science: a latent piecewise growth curve approach to model publication numbers from established and new literature databases». Humanities and Social Sciences Communications. 8 (1): 1–15. doi:10.1057/s41599-021-00903-w. ISSN 2662-9992. S2CID 229156128.
  112. ^ a b Thompson, Derek (1 December 2021). «America Is Running on Fumes». The Atlantic. Retrieved 27 January 2023.
  113. ^ Collison, Patrick; Nielsen, Michael (16 November 2018). «Science Is Getting Less Bang for Its Buck». The Atlantic. Retrieved 27 January 2023.
  114. ^ a b «How to escape scientific stagnation». The Economist. Retrieved 25 January 2023.
  115. ^ a b c Bhattacharya, Jay; Packalen, Mikko (February 2020). «Stagnation and Scientific Incentives» (PDF). National Bureau of Economic Research.
  116. ^ Tejada, Patricia Contreras (13 January 2023). «With fewer disruptive studies, is science becoming an echo chamber?». Advanced Science News. Archived from the original on 15 February 2023. Retrieved 15 February 2023.
  117. ^
  118. ^ Petrovich, Eugenio (2020). «Science mapping». www.isko.org. Retrieved 27 January 2023.
  119. ^ Chen, Chaomei (21 March 2017). «Science Mapping: A Systematic Review of the Literature». Journal of Data and Information Science. 2 (2): 1–40. doi:10.1515/jdis-2017-0006. S2CID 57737772.
  120. ^ Gutiérrez-Salcedo, M.; Martínez, M. Ángeles; Moral-Munoz, J. A.; Herrera-Viedma, E.; Cobo, M. J. (1 May 2018). «Some bibliometric procedures for analyzing and evaluating research fields». Applied Intelligence. 48 (5): 1275–1287. doi:10.1007/s10489-017-1105-y. ISSN 1573-7497. S2CID 254227914.
  121. ^ Navarro, V. (31 March 2008). «Politics and health: a neglected area of research». The European Journal of Public Health. 18 (4): 354–355. doi:10.1093/eurpub/ckn040. PMID 18524802.
  122. ^ Farley-Ripple, Elizabeth N.; Oliver, Kathryn; Boaz, Annette (7 September 2020). «Mapping the community: use of research evidence in policy and practice». Humanities and Social Sciences Communications. 7 (1): 1–10. doi:10.1057/s41599-020-00571-2. ISSN 2662-9992.
  123. ^ a b Lamers, Wout S; Boyack, Kevin; Larivière, Vincent; Sugimoto, Cassidy R; van Eck, Nees Jan; Waltman, Ludo; Murray, Dakota (24 December 2021). «Investigating disagreement in the scientific literature». eLife. 10: e72737. doi:10.7554/eLife.72737. ISSN 2050-084X. PMC 8709576. PMID 34951588.
  124. ^ LeLorier J, Grégoire G, Benhaddad A, Lapierre J, Derderian F (August 1997). «Discrepancies between meta-analyses and subsequent large randomized, controlled trials». The New England Journal of Medicine. 337 (8): 536–542. doi:10.1056/NEJM199708213370806. PMID 9262498.
  125. ^ a b Slavin RE (1986). «Best-Evidence Synthesis: An Alternative to Meta-Analytic and Traditional Reviews». Educational Researcher. 15 (9): 5–9. doi:10.3102/0013189X015009005. S2CID 146457142.
  126. ^ Hunter JE, Schmidt FL, Jackson GB, et al. (American Psychological Association. Division of Industrial-Organizational Psychology) (1982). Meta-analysis: cumulating research findings across studies. Beverly Hills, California: Sage. ISBN 978-0-8039-1864-1.
  127. ^ Glass GV, McGaw B, Smith ML (1981). Meta-analysis in social research. Beverly Hills, California: Sage Publications. ISBN 978-0-8039-1633-3.
  128. ^ Stone, Dianna L.; Rosopa, Patrick J. (1 March 2017). «The Advantages and Limitations of Using Meta-analysis in Human Resource Management Research». Human Resource Management Review. 27 (1): 1–7. doi:10.1016/j.hrmr.2016.09.001. ISSN 1053-4822.
  129. ^ Elliott, Julian; Lawrence, Rebecca; Minx, Jan C.; Oladapo, Olufemi T.; Ravaud, Philippe; Tendal Jeppesen, Britta; Thomas, James; Turner, Tari; Vandvik, Per Olav; Grimshaw, Jeremy M. (December 2021). «Decision makers need constantly updated evidence synthesis». Nature. 600 (7889): 383–385. Bibcode:2021Natur.600..383E. doi:10.1038/d41586-021-03690-1. PMID 34912079. S2CID 245220047.
  130. ^ Snyder, Alison (14 October 2021). «New ideas are struggling to emerge from the sea of science». Axios. Retrieved 15 November 2021.
  131. ^ Chu, Johan S. G.; Evans, James A. (12 October 2021). «Slowed canonical progress in large fields of science». Proceedings of the National Academy of Sciences. 118 (41): e2021636118. Bibcode:2021PNAS..11821636C. doi:10.1073/pnas.2021636118. ISSN 0027-8424. PMC 8522281. PMID 34607941.
  132. ^ «Sharing of tacit knowledge is most important aspect of mentorship, study finds». phys.org. Retrieved 4 July 2020.
  133. ^ Ma, Yifang; Mukherjee, Satyam; Uzzi, Brian (23 June 2020). «Mentorship and protégé success in STEM fields». Proceedings of the National Academy of Sciences. 117 (25): 14077–14083. Bibcode:2020PNAS..11714077M. doi:10.1073/pnas.1915516117. ISSN 0027-8424. PMC 7322065. PMID 32522881.
  134. ^ «Science of Science authors hope to spark conversations about the scientific enterprise». phys.org. Retrieved 28 January 2023.
  135. ^ van Dijk, Peter J.; Jessop, Adrienne P.; Ellis, T. H. Noel (July 2022). «How did Mendel arrive at his discoveries?». Nature Genetics. 54 (7): 926–933. doi:10.1038/s41588-022-01109-9. ISSN 1546-1718. PMID 35817970. S2CID 250454204.
  136. ^ Root-Bernstein, Robert S.; Bernstein, Maurine; Garnier, Helen (1 April 1995). «Correlations Between Avocations, Scientific Style, Work Habits, and Professional Impact of Scientists». Creativity Research Journal. 8 (2): 115–137. doi:10.1207/s15326934crj0802_2. ISSN 1040-0419.
  137. ^ Ince, Sharon; Hoadley, Christopher; Kirschner, Paul A. (1 January 2022). «A qualitative study of social sciences faculty research workflows». Journal of Documentation. 78 (6): 1321–1337. doi:10.1108/JD-08-2021-0168. ISSN 0022-0418.
  138. ^ Nassi-Calò, Lilian (3 April 2014). «Researchers reading habits for scientific literature | SciELO in Perspective». Retrieved 25 February 2023.
  139. ^ Van Noorden, Richard (3 February 2014). «Scientists may be reaching a peak in reading habits». Nature. doi:10.1038/nature.2014.14658. Retrieved 25 February 2023.
  140. ^ Arshad, Alia; Ameen, Kanwal (1 January 2021). «Comparative analysis of academic scientists, social scientists and humanists’ scholarly information seeking habits». The Journal of Academic Librarianship. 47 (1): 102297. doi:10.1016/j.acalib.2020.102297. ISSN 0099-1333.
  141. ^ «Why it pays to join a big research group if you want to be more scientifically productive». Physics World. 24 November 2022. Retrieved 13 December 2022.
  142. ^ Zhang, Sam; Wapman, K. Hunter; Larremore, Daniel B.; Clauset, Aaron (16 November 2022). «Labor advantages drive the greater productivity of faculty at elite universities». Science Advances. 8 (46): eabq7056. arXiv:2204.05989. Bibcode:2022SciA....8.7056Z. doi:10.1126/sciadv.abq7056. ISSN 2375-2548. PMC 9674273. PMID 36399560.
  143. ^ «Academic Incentives and Research Impact: Developing Reward and Recognition Systems to Better People’s Lives». DORA. Retrieved 28 January 2023.
  144. ^ Collison, Patrick; Cowen, Tyler (30 July 2019). «We Need a New Science of Progress». The Atlantic. Retrieved 25 January 2023.
  145. ^ Lovely, Garrison. «Do we need a better understanding of ‘progress’?». BBC. Retrieved 27 January 2023.
  146. ^ Niehaus, Paul; Williams, Heidi. «Developing the science of science». Works in Progress. Retrieved 25 January 2023.
  147. ^ «Registered Replication Reports». Association for Psychological Science. Retrieved 13 November 2015.
  148. ^ Chambers, Chris (20 May 2014). «Psychology's 'registration revolution'». The Guardian. Retrieved 13 November 2015.
  149. ^ Simera, I; Moher, D; Hirst, A; Hoey, J; Schulz, KF; Altman, DG (2010). «Transparent and accurate reporting increases reliability, utility, and impact of your research: reporting guidelines and the EQUATOR Network». BMC Medicine. 8: 24. doi:10.1186/1741-7015-8-24. PMC 2874506. PMID 20420659.
  150. ^ Simera, I.; Moher, D.; Hoey, J.; Schulz, K. F.; Altman, D. G. (2010). «A catalogue of reporting guidelines for health research». European Journal of Clinical Investigation. 40 (1): 35–53. doi:10.1111/j.1365-2362.2009.02234.x. PMID 20055895.
  151. ^ Simera, I; Altman, DG (October 2009). «Writing a research article that is «fit for purpose»: EQUATOR Network and reporting guidelines». Evidence-Based Medicine. 14 (5): 132–134. doi:10.1136/ebm.14.5.132. PMID 19794009. S2CID 36739841.
  152. ^ Ep. 49: Joel Chan on metascience, creativity, and tools for thought.
  153. ^ «Risk of Bias Tool | Cochrane Bias». methods.cochrane.org. Retrieved 25 January 2023.
  154. ^ Prasad, Vinay; Ioannidis, John P. A. (November 2022). «Constructive and obsessive criticism in science». European Journal of Clinical Investigation. 52 (11): e13839. doi:10.1111/eci.13839. ISSN 0014-2972. PMC 9787955. PMID 35869811.
  155. ^ Khamsi, Roxanne (1 May 2020). «Coronavirus in context: Scite.ai tracks positive and negative citations for COVID-19 literature». Nature. doi:10.1038/d41586-020-01324-6. Retrieved 19 February 2022.
  156. ^ Nicholson, Josh M.; Mordaunt, Milo; Lopez, Patrice; Uppala, Ashish; Rosati, Domenic; Rodrigues, Neves P.; Grabitz, Peter; Rife, Sean C. (5 November 2021). «scite: A smart citation index that displays the context of citations and classifies their intent using deep learning». Quantitative Science Studies. 2 (3): 882–898. doi:10.1162/qss_a_00146. S2CID 232283218.
  157. ^ a b c «New bot flags scientific studies that cite retracted papers». Nature Index. Retrieved 25 January 2023.
  159. ^ Segado-Boj, Francisco; Martín-Quevedo, Juan; Prieto-Gutiérrez, Juan-José (12 December 2022). «Jumping over the paywall: Strategies and motivations for scholarly piracy and other alternatives» (PDF). Information Development. doi:10.1177/02666669221144429. ISSN 0266-6669. S2CID 254564205.
  160. ^ Gosztyla, Maya (7 July 2022). «How to find, read and organize papers». Nature. doi:10.1038/d41586-022-01878-7. PMID 35804061. S2CID 250388551. Retrieved 28 January 2023.
  161. ^ Fastrez, Pierre; Jacques, Jerry (2015). «Managing References by Filing and Tagging». Human Interface and the Management of Information. Information and Knowledge Design. Lecture Notes in Computer Science. Springer International Publishing. 9172: 291–300. doi:10.1007/978-3-319-20612-7_28. ISBN 978-3-319-20611-0.
  162. ^ Chaudhry, Abdus Sattar; Alajmi, Bibi M. (1 January 2022). «Personal information management practices: how scientists find and organize information». Global Knowledge, Memory and Communication. ahead-of-print (ahead-of-print). doi:10.1108/GKMC-04-2022-0082. S2CID 253363619.
  163. ^ Chang, Joseph Chee; Kim, Yongsung; Miller, Victor; Liu, Michael Xieyang; Myers, Brad A; Kittur, Aniket (12 October 2021). «Tabs.do: Task-Centric Browser Tab Management». The 34th Annual ACM Symposium on User Interface Software and Technology. Association for Computing Machinery: 663–676. doi:10.1145/3472749.3474777. ISBN 9781450386357. S2CID 237102658.
  164. ^ Rasberry, Lane; Tibbs, Sheri; Hoos, William; Westermann, Amy; Keefer, Jeffrey; Baskauf, Steven James; Anderson, Clifford; Walker, Philip; Kwok, Cherrie; Mietchen, Daniel (4 April 2022). «WikiProject Clinical Trials for Wikidata». doi:10.1101/2022.04.01.22273328. S2CID 247936371.
  165. ^ Moral-Muñoz, José A.; Herrera-Viedma, Enrique; Santisteban-Espejo, Antonio; Cobo, Manuel J. (19 January 2020). «Software tools for conducting bibliometric analysis in science: An up-to-date review». El Profesional de la Información. 29 (1). doi:10.3145/epi.2020.ene.03. S2CID 210926828.
  166. ^ «A new replication crisis: Research that is less likely to be true is cited more». phys.org. Retrieved 14 June 2021.
  167. ^ Serra-Garcia, Marta; Gneezy, Uri (1 May 2021). «Nonreplicable publications are cited more than replicable ones». Science Advances. 7 (21): eabd1705. Bibcode:2021SciA....7D1705S. doi:10.1126/sciadv.abd1705. ISSN 2375-2548. PMC 8139580. PMID 34020944.
  168. ^ Parker, Lisa; Boughton, Stephanie; Lawrence, Rosa; Bero, Lisa (1 November 2022). «Experts identified warning signs of fraudulent research: a qualitative study to inform a screening tool». Journal of Clinical Epidemiology. 151: 1–17. doi:10.1016/j.jclinepi.2022.07.006. PMID 35850426. S2CID 250632662.
  169. ^ Ioannidis, JPA (2016). «Why Most Clinical Research Is Not Useful». PLOS Med. 13 (6): e1002049. doi:10.1371/journal.pmed.1002049. PMC 4915619. PMID 27328301.
  170. ^ Ioannidis JA (13 July 2005). «Contradicted and initially stronger effects in highly cited clinical research». JAMA. 294 (2): 218–228. doi:10.1001/jama.294.2.218. PMID 16014596.
  171. ^ Chalmers, Iain; Glasziou, Paul (2009). «Avoidable waste in the production and reporting of research evidence». The Lancet. 374 (9683): 86–89. doi:10.1016/S0140-6736(09)60329-9. ISSN 0140-6736. PMID 19525005. S2CID 11797088.
  172. ^ Hsu, Jeremy (24 June 2010). «Dark Side of Medical Research: Widespread Bias and Omissions». Live Science. Retrieved 24 May 2019.
  173. ^ «Confronting conflict of interest». Nature Medicine. 24 (11): 1629. November 2018. doi:10.1038/s41591-018-0256-7. ISSN 1546-170X. PMID 30401866.
  174. ^ Haque, Waqas; Minhajuddin, Abu; Gupta, Arjun; Agrawal, Deepak (2018). «Conflicts of interest of editors of medical journals». PLOS ONE. 13 (5): e0197141. Bibcode:2018PLoSO..1397141H. doi:10.1371/journal.pone.0197141. ISSN 1932-6203. PMC 5959187. PMID 29775468.
  175. ^ Moncrieff, J (March 2002). «The antidepressant debate». The British Journal of Psychiatry. 180 (3): 193–194. doi:10.1192/bjp.180.3.193. ISSN 0007-1250. PMID 11872507.
  176. ^ Bello, S; Moustgaard, H; Hróbjartsson, A (October 2014). «The risk of unblinding was infrequently and incompletely reported in 300 randomized clinical trial publications». Journal of Clinical Epidemiology. 67 (10): 1059–1069. doi:10.1016/j.jclinepi.2014.05.007. ISSN 1878-5921. PMID 24973822.
  177. ^ Tuleu, Catherine; Legay, Helene; Orlu-Gul, Mine; Wan, Mandy (1 September 2013). «Blinding in pharmacological trials: the devil is in the details». Archives of Disease in Childhood. 98 (9): 656–659. doi:10.1136/archdischild-2013-304037. ISSN 0003-9888. PMC 3833301. PMID 23898156.
  178. ^ Kirsch, I (2014). «Antidepressants and the Placebo Effect». Zeitschrift für Psychologie. 222 (3): 128–134. doi:10.1027/2151-2604/a000176. ISSN 2190-8370. PMC 4172306. PMID 25279271.
  179. ^ Ioannidis, John PA (27 May 2008). «Effectiveness of antidepressants: an evidence myth constructed from a thousand randomized trials?». Philosophy, Ethics, and Humanities in Medicine. 3: 14. doi:10.1186/1747-5341-3-14. ISSN 1747-5341. PMC 2412901. PMID 18505564.
  180. ^ Moher, David; Altman, Douglas G.; Schulz, Kenneth F. (24 March 2010). «CONSORT 2010 Statement: updated guidelines for reporting parallel group randomised trials». BMJ. 340: c332. doi:10.1136/bmj.c332. ISSN 0959-8138. PMC 2844940. PMID 20332509.
  181. ^ Clarke, Michael; Chalmers, Iain (1998). «Discussion Sections in Reports of Controlled Trials Published in General Medical Journals». JAMA. 280 (3): 280–282. doi:10.1001/jama.280.3.280. PMID 9676682.
  182. ^ a b Lau, Joseph; Antman, Elliott M; Jimenez-Silva, Jeanette; Kupelnick, Bruce; Mosteller, Frederick; Chalmers, Thomas C (1992). «Cumulative Meta-Analysis of Therapeutic Trials for Myocardial Infarction». New England Journal of Medicine. 327 (4): 248–254. doi:10.1056/NEJM199207233270406. PMID 1614465.
  183. ^ Fergusson, Dean; Glass, Kathleen Cranley; Hutton, Brian; Shapiro, Stan (2016). «Randomized controlled trials of aprotinin in cardiac surgery: Could clinical equipoise have stopped the bleeding?». Clinical Trials. 2 (3): 218–229, discussion 229–232. doi:10.1191/1740774505cn085oa. PMID 16279145. S2CID 31375469.
  184. ^ Clarke, Mike; Brice, Anne; Chalmers, Iain (2014). «Accumulating Research: A Systematic Account of How Cumulative Meta-Analyses Would Have Provided Knowledge, Improved Health, Reduced Harm and Saved Resources». PLOS ONE. 9 (7): e102670. Bibcode:2014PLoSO...9j2670C. doi:10.1371/journal.pone.0102670. PMC 4113310. PMID 25068257.
  185. ^ a b Robinson, Karen A; Goodman, Steven N (2011). «A Systematic Examination of the Citation of Prior Research in Reports of Randomized, Controlled Trials». Annals of Internal Medicine. 154 (1): 50–55. doi:10.7326/0003-4819-154-1-201101040-00007. PMID 21200038. S2CID 207536137.
  186. ^ Epstein, David. «When Evidence Says No, but Doctors Say Yes — The Atlantic». Pocket. Retrieved 10 April 2020.
  187. ^ Tatsioni, A; Bonitsis, NG; Ioannidis, JP (5 December 2007). «Persistence of contradicted claims in the literature». JAMA. 298 (21): 2517–2526. doi:10.1001/jama.298.21.2517. ISSN 1538-3598. PMID 18056905.
  188. ^ Franco, Annie; Malhotra, Neil; Simonovits, Gabor (1 January 2016). «Underreporting in Psychology Experiments: Evidence From a Study Registry». Social Psychological and Personality Science. 7 (1): 8–12. doi:10.1177/1948550615598377. ISSN 1948-5506. S2CID 143182733.
  189. ^ Munafò, Marcus (29 March 2017). «Metascience: Reproducibility blues». Nature. 543 (7647): 619–620. Bibcode:2017Natur.543..619M. doi:10.1038/543619a. ISSN 1476-4687.
  190. ^ Stokstad, Erik (20 September 2018). «This research group seeks to expose weaknesses in science – and they’ll step on some toes if they have to». Science. doi:10.1126/science.aav4784. S2CID 158525979.
  191. ^ Open Science Collaboration (2015). «Estimating the reproducibility of psychological science» (PDF). Science. 349 (6251): aac4716. doi:10.1126/science.aac4716. hdl:10722/230596. PMID 26315443. S2CID 218065162.
  192. ^ a b Allen, Christopher P G.; Mehler, David Marc Anton. «Open Science challenges, benefits and tips in early career and beyond». doi:10.31234/osf.io/3czyt. S2CID 240061030.
  193. ^ Simmons, Joseph P.; Nelson, Leif D.; Simonsohn, Uri (2011). «False-Positive Psychology». Psychological Science. 22 (11): 1359–1366. doi:10.1177/0956797611417632. PMID 22006061.
  194. ^ Stroebe, Wolfgang; Strack, Fritz (2014). «The Alleged Crisis and the Illusion of Exact Replication» (PDF). Perspectives on Psychological Science. 9 (1): 59–71. doi:10.1177/1745691613514450. PMID 26173241. S2CID 31938129.
  195. ^ Aschwanden, Christie (6 December 2018). «Psychology’s Replication Crisis Has Made The Field Better». FiveThirtyEight. Retrieved 19 December 2018.
  196. ^ Cohen, Jacob (1994). «The earth is round (p < .05)». American Psychologist. 49 (12): 997–1003. doi:10.1037/0003-066X.49.12.997. S2CID 380942.
  197. ^ MacCoun, Robert; Perlmutter, Saul (8 October 2015). «Blind analysis: Hide results to seek the truth». Nature. 526 (7572): 187–189. Bibcode:2015Natur.526..187M. doi:10.1038/526187a. PMID 26450040.
  198. ^ «Meta-Research Innovation Center Berlin». Meta-Research Innovation Center Berlin. Retrieved 6 December 2021.
  199. ^ «Home | Meta-research Innovation Center at Stanford». metrics.stanford.edu. Retrieved 6 December 2021.
  200. ^ «Meta-research and Evidence Synthesis Unit». The George Institute for Global Health. Retrieved 19 December 2021.
  201. ^ «Metascience 2021». Metascience 2021. Retrieved 20 February 2022.

Further reading[edit]

  • Lydia Denworth, «A Significant Problem: Standard scientific methods are under fire. Will anything change?», Scientific American, vol. 321, no. 4 (October 2019), pp. 62–67.
    • «The use of p values for nearly a century [since 1925] to determine statistical significance of experimental results has contributed to an illusion of certainty and [to] reproducibility crises in many scientific fields. There is growing determination to reform statistical analysis… Some [researchers] suggest changing statistical methods, whereas others would do away with a threshold for defining «significant» results.» (p. 63.)
  • Harris, Richard (2017). Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hopes, and Wastes Billions. Basic Books. ISBN 978-0465097913.
  • Fortunato, Santo; Bergstrom, Carl T.; et al. (2 March 2018). «Science of science». Science. 359 (6379): eaao0185. doi:10.1126/science.aao0185. PMC 5949209. PMID 29496846.

External links[edit]

Journals

  • Minerva: A Journal of Science, Learning and Policy
  • Research Integrity and Peer Review
  • Research Policy
  • Science and Public Policy

Conferences

  • Annual Metascience Conference

Which is correct: мета-анализ or метаанализ?

Reply from the Russian language reference service

The solid (unhyphenated) spelling is correct.

Good afternoon!
Please advise: in a medical text, is the word «метаанализ» written with two letters «а»?

Reply from the Russian language reference service

The correct spelling is метаанализ.

Which is correct: мета-анализ or метаанализ?
Thank you.

Reply from the Russian language reference service

The prefix мета… is written solid. Correct: метаанализ.

What does the prefix мета- mean? In the word мета(?)анализ, is «мета» a prefix? Is мета(?)анализ written solid or with a hyphen? Thank you.

Reply from the Russian language reference service

Мета is a prefix and is written solid: метаанализ.

Parsing the parts of speech

Next, let us examine the morphological features of each part of speech in Russian, with examples. According to Russian linguistics, the 10 parts of speech fall into three groups, based on shared features:

1. Independent (notional) parts of speech:

  • nouns (see the morphological norms for nouns);
  • verbs:
    • participles;
    • adverbial participles (gerunds);
  • adjectives;
  • numerals;
  • pronouns;
  • adverbs;

2. Auxiliary (functional) parts of speech:

  • prepositions;
  • conjunctions;
  • particles;

3. Interjections.

The following fall into none of these classifications of the Russian morphological system:

  • the words да and нет ('yes' and 'no') when they act as independent sentences;
  • parenthetical words such as итак, кстати, итого ('so', 'by the way', 'in total') when used as separate sentences, as well as a number of other words.

Morphological analysis of a noun

Plan for the morphological analysis of a noun

Example:

«Малыш пьет молоко.» ('The little one drinks milk.')

Малыш ('little one'; answers the question who?) – noun;

  • initial (dictionary) form – малыш;
  • constant morphological features: animate, common, concrete, masculine, 1st declension;
  • variable morphological features: nominative case, singular;
  • in the syntactic analysis of the sentence it serves as the subject.

Morphological analysis of the word «молоко» ('milk'; answers the question whom? what?):

  • initial form – молоко;
  • constant morphological features: neuter, inanimate, mass (material), common, 2nd declension;
  • variable morphological features: accusative case, singular;
  • in the sentence it is a direct object.
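
The same parsing plan is easy to automate. A minimal sketch in Python, assuming the pymorphy2 morphological analyzer is installed (pip install pymorphy2); any comparable analyzer for Russian would serve, and the same call handles verbs and adjectives as well as nouns:

    # Automated counterpart of the manual parse above: lemma plus
    # constant and variable grammatical features for each word.
    import pymorphy2

    morph = pymorphy2.MorphAnalyzer()
    for word in ["Малыш", "молоко"]:
        parse = morph.parse(word)[0]          # most probable analysis
        print(word, "->", parse.normal_form, "|", parse.tag)
    # e.g. "малыш | NOUN,anim,masc sing,nomn": animate masculine noun,
    # nominative singular. Out of context the analyzer may prefer the
    # nominative reading of "молоко" over the accusative one.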

Here is another sample of the morphological analysis of nouns, based on a literary source:

«Две дамы подбежали к Лужину и помогли ему встать. Он ладонью стал сбивать пыль с пальто.» ('Two ladies ran up to Luzhin and helped him get up. With his palm he began knocking the dust off his coat.' From The Luzhin Defense by Vladimir Nabokov.)

Дамы ('ladies'; who?) – noun;

  • initial form – дама;
  • constant morphological features: common, animate, concrete, feminine, 1st declension;
  • variable morphological features: singular, genitive case (the form governed by the numeral две);
  • syntactic role: part of the subject.

Лужину ('to Luzhin'; to whom?) – noun;

  • initial form – Лужин;
  • constant morphological features: proper, animate, concrete, masculine, mixed declension;
  • variable morphological features: singular, dative case;
  • syntactic role: object.

Ладонью ('with his palm'; with what?) – noun;

  • initial form – ладонь;
  • constant morphological features: feminine, inanimate, common, concrete, 3rd declension;
  • variable morphological features: singular, instrumental case;
  • syntactic role in context: object.

Пыль ('dust'; what?) – noun;

  • initial form – пыль;
  • constant morphological features: common, mass (material), feminine, singular-only, 3rd declension (noun with a zero ending); animacy is not specified;
  • variable morphological feature: accusative case;
  • syntactic role: object.

Пальто ('coat'; in с пальто, off what?) – noun;

  • initial form – пальто;
  • constant morphological features: inanimate, common, concrete, neuter, indeclinable;
  • variable morphological features: the number cannot be determined from the context; genitive case;
  • syntactic role in the sentence: object.

Morphological analysis of an adjective

The adjective is a notional part of speech. It answers the questions какой? какое? какая? какие? ('what kind?') and characterizes the features or qualities of an object. The morphological features of the adjective:

  • the initial form is the nominative case, singular, masculine;
  • constant morphological features of adjectives:
    • class, according to meaning:
      • qualitative (теплый 'warm', молчаливый 'taciturn');
      • relational (вчерашний 'yesterday's', читальный 'reading');
      • possessive (заячий 'hare's', мамин 'mother's');
    • degree of comparison (for qualitative adjectives, in which this feature is constant);
    • full / short form (for qualitative adjectives, in which this feature is constant);
  • variable morphological features of the adjective:
    • qualitative adjectives vary by degree of comparison (the comparative has a simple form, the superlative a compound one): красивый – красивее – самый красивый ('beautiful – more beautiful – most beautiful');
    • full or short form (qualitative adjectives only);
    • gender (in the singular only);
    • number (agrees with the noun);
    • case (agrees with the noun);
  • syntactic role in the sentence: an adjective serves as an attribute or as part of a compound nominal predicate.

Plan for the morphological analysis of an adjective

Example sentence:

Полная луна взошла над городом. ('A full moon rose over the city.')

Полная ('full'; what kind?) – adjective;

  • initial form – полный;
  • constant morphological features of the adjective: qualitative, full form;
  • variable morphological features: positive (zero) degree of comparison, feminine (agreeing with the noun), nominative case;
  • in the syntactic analysis, a secondary member of the sentence serving as an attribute.

Here is a whole literary excerpt with the adjectives analysed by example:

Девушка была прекрасна: стройная, тоненькая, глаза голубые, как два изумительных сапфира, так и заглядывали к вам в душу. ('The girl was beautiful: slender, slim, and her blue eyes, like two wondrous sapphires, seemed to peer into your soul.')

Прекрасна ('beautiful'; what is she like?) – adjective;

  • initial form – прекрасен (in this meaning);
  • constant morphological features: qualitative, short form;
  • variable morphological features: positive degree of comparison, singular, feminine;
  • syntactic role: part of the predicate.

Стройная ('slender'; what kind?) – adjective;

  • initial form – стройный;
  • constant morphological features: qualitative, full;
  • variable morphological features: full form, positive degree of comparison, singular, feminine, nominative case;
  • syntactic role in the sentence: part of the predicate.

Тоненькая ('slim'; what kind?) – adjective;

  • initial form – тоненький;
  • constant morphological features: qualitative, full;
  • variable morphological features: positive degree of comparison, singular, feminine, nominative case;
  • syntactic role: part of the predicate.

Голубые ('blue'; what kind?) – adjective;

  • initial form – голубой;
  • constant morphological feature: qualitative;
  • variable morphological features: full form, positive degree of comparison, plural, nominative case;
  • syntactic role: attribute.

Изумительных ('wondrous'; what kind?) – adjective;

  • initial form – изумительный;
  • constant morphological feature: qualitative;
  • variable morphological features: plural, genitive case;
  • syntactic role in the sentence: part of an adverbial (comparative) phrase.

Morphological features of the verb

According to Russian morphology, the verb is an independent part of speech. It can denote an action (гулять 'to stroll'), a property (хромать 'to limp'), a relation (равняться 'to be equal'), a state (радоваться 'to rejoice'), or a feature (белеться 'to show white', красоваться 'to flaunt') of an object. Verbs answer the questions что делать? что сделать? ('what to do?'), что делает? что делал? ('what does / did he do?') or что будет делать? ('what will he do?'). Different groups of verbal word forms have heterogeneous morphological characteristics and grammatical features.

Morphological forms of verbs:

  • the initial form of the verb is the infinitive, also called the indefinite or invariable form of the verb; it has no variable morphological features;
  • conjugated (personal and impersonal) forms;
  • non-conjugated forms: participles and adverbial participles.

Morphological analysis of a verb

  • initial form – the infinitive;
  • constant morphological features of the verb:
    • transitivity:
      • transitive (used with a noun in the accusative case without a preposition);
      • intransitive (not used with a noun in the accusative case without a preposition);
    • reflexivity:
      • reflexive (ending in -ся, -сь);
      • non-reflexive (no -ся, -сь);
    • aspect:
      • imperfective (что делать? 'what to do?');
      • perfective (что сделать? 'what to get done?');
    • conjugation:
      • 1st conjugation (дела-ешь, дела-ет, дела-ем, дела-ете, дела-ют/ут);
      • 2nd conjugation (сто-ишь, сто-ит, сто-им, сто-ите, сто-ят/ат);
      • irregularly conjugated verbs (хотеть, бежать);
  • variable morphological features of the verb:
    • mood:
      • indicative: что делал? что сделал? что делает? что сделает?;
      • conditional: что делал бы? что сделал бы?;
      • imperative: делай! ('do!');
    • tense (in the indicative mood: past / present / future);
    • person (in the present and future tenses of the indicative and imperative moods: 1st person я/мы, 2nd person ты/вы, 3rd person он/они);
    • gender (in the past tense, singular, of the indicative and conditional moods);
    • number;
  • syntactic role in the sentence. The infinitive can be any member of the sentence:
    • the predicate: Быть сегодня празднику ('There is to be a celebration today');
    • the subject: Учиться всегда пригодится ('Studying always comes in useful');
    • an object: Все гости просили ее станцевать ('All the guests asked her to dance');
    • an attribute: У него возникло непреодолимое желание поесть ('He felt an irresistible urge to eat');
    • an adverbial: Я вышел пройтись ('I went out for a stroll').

Morphological analysis of a verb: an example

To make the scheme clear, let us carry out a written morphological analysis of a verb, using an example sentence:

Вороне как-то Бог послал кусочек сыру… ('God once sent the crow a little piece of cheese…'; from a fable by I. Krylov)

Послал ('sent'; what did he do?) – verb;

  • initial form – послать;
  • constant morphological features: perfective aspect, transitive, 1st conjugation;
  • variable morphological features of the verb: indicative mood, past tense, masculine, singular;
  • syntactic role in the sentence: predicate.

Another sample morphological analysis of a verb in a sentence:

Какая тишина, прислушайтесь. ('What silence; listen.')

Прислушайтесь ('listen'; what should you do?) – verb;

  • initial form – прислушаться;
  • constant morphological features: perfective aspect, intransitive, reflexive, 1st conjugation;
  • variable morphological features of the word: imperative mood, plural, 2nd person;
  • syntactic role in the sentence: predicate.

A plan for the morphological analysis of verbs, based on an example from a whole passage:

— Его нужно предостеречь. ('He must be warned.')

— Не надо, пусть знает в другой раз, как нарушать правила. ('No need; let him know better next time than to break the rules.')

— Что за правила? ('What rules?')

— Подождите, потом скажу. Вошел! ('Wait, I'll tell you later. He's gone in!') (The Golden Calf, I. Ilf and E. Petrov)

Предостеречь ('to warn'; what to do?) – verb;

  • initial form – предостеречь;
  • constant morphological features of the verb: perfective aspect, transitive, non-reflexive, 1st conjugation;
  • variable morphological features: infinitive;
  • syntactic function in the sentence: part of a compound predicate.

Пусть знает ('let him know'; what does he do?) – verb;

  • initial form – знать;
  • constant morphological features: imperfective aspect, non-reflexive, transitive, 1st conjugation;
  • variable morphological features of the verb: imperative mood, singular, 3rd person;
  • syntactic role in the sentence: predicate.

Нарушать ('to break'; what to do?) – verb;

  • initial form – нарушать;
  • constant morphological features: imperfective aspect, non-reflexive, transitive, 1st conjugation;
  • variable morphological features of the verb: infinitive (the initial form);
  • syntactic role in context: part of the predicate.

Подождите ('wait'; what should you do?) – verb;

  • initial form – подождать;
  • constant morphological features: perfective aspect, non-reflexive, transitive, 1st conjugation;
  • variable morphological features of the verb: imperative mood, plural, 2nd person;
  • syntactic role in the sentence: predicate.

Вошел ('he went in'; what did he do?) – verb;

  • initial form – войти;
  • constant morphological features: perfective aspect, non-reflexive, intransitive, 1st conjugation;
  • variable morphological features of the verb: past tense, indicative mood, singular, masculine;
  • syntactic role in the sentence: predicate.

Areas of meta-research

Metascience can be categorized into five major areas of interest: methods, reporting, reproducibility, evaluation, and incentives. These correspond, respectively, with how to perform, communicate, verify, evaluate, and reward research.

Methods

Metascience seeks to identify poor research practices, including biases in research, poor study design and the misuse of statistics, and to find methods to reduce these practices. Meta-research has identified numerous biases in the scientific literature. Of particular note is the widespread misuse of p-values and the abuse of statistical significance.
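
The mechanics of the p-value problem are easy to demonstrate. A minimal simulation sketch (Python with numpy and scipy, an assumed toolchain): when many true-null hypotheses are each tested at α = 0.05 and only the "significant" ones are reported, spurious findings are guaranteed.

    # Uncorrected multiple testing: every null hypothesis here is true,
    # yet about 5% of tests come out "statistically significant".
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_tests, n = 100, 30
    false_positives = 0
    for _ in range(n_tests):
        a = rng.normal(0, 1, n)   # both samples drawn from the same
        b = rng.normal(0, 1, n)   # distribution: no real effect exists
        _, p = stats.ttest_ind(a, b)
        false_positives += p < 0.05
    print(false_positives, "spurious 'discoveries' out of", n_tests)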

Reporting

Meta-research has identified poor practices in the reporting, explanation, dissemination and popularization of research, particularly within the social and health sciences. Poor reporting makes it difficult to interpret the results of scientific studies accurately, to replicate studies, and to identify biases and conflicts of interest in the authors. Solutions include the implementation of reporting standards and greater transparency in scientific studies (including stricter requirements for the disclosure of conflicts of interest). There is an attempt to standardize the reporting of data and methodology through the creation of guidelines by agencies such as CONSORT and the larger EQUATOR Network.

Reproducibility

The replication crisis is an ongoing methodological crisis in which many scientific studies have been found difficult or impossible to reproduce. Although the roots of the crisis go back to meta-research of the mid- to late 1900s, the phrase "replication crisis" was not coined until the early 2010s, as part of a growing awareness of the problem. The replication crisis particularly affects psychology (especially social psychology) and medicine. Replication is an essential part of the scientific process, and widespread replication failures call into question the reliability of the affected fields.

Moreover, a replication of a study (or a failure to replicate it) is considered less influential than the original study and is less likely to be published in many fields. This discourages both the reporting of replication attempts and even the attempts themselves.

Evaluation

Metascience seeks to create a scientific foundation for peer review. Meta-research evaluates systems of review, including pre-publication peer review, post-publication peer review, and open peer review. It also seeks to develop better criteria for research funding.

Incentives

Metascience seeks to promote better research through better incentive systems. This includes studying the accuracy, effectiveness, costs and benefits of different approaches to ranking and evaluating research and those who perform it. Critics argue that perverse incentives have created a publish-or-perish environment in academia that promotes the production of junk science, low-quality research and false positives. In the words of Brian Nosek, «The problem that we face is that the incentive system is focused almost entirely on getting research published, rather than on getting research right.» Advocates of reform seek to structure the incentive system to favour higher-quality results.

Reforms

Meta-research identifying flaws in scientific practice has inspired reforms in science. These reforms seek to address and remedy problems in scientific practice that lead to low-quality or inefficient research.

Preregistration

The practice of registering a scientific study before it is conducted is called preregistration. It arose as a means of addressing the replication crisis. Preregistration requires the submission of a registered report, which is then accepted for publication or rejected by a journal on the basis of its theoretical justification, experimental design and proposed statistical analysis. Preregistration of studies serves to prevent publication bias, reduce data dredging and increase replicability.

Reporting standards

Studies showing poor consistency and quality of reporting have demonstrated the need for reporting standards and guidelines in science, which has led to the rise of organisations that produce such standards, such as CONSORT (Consolidated Standards of Reporting Trials) and the EQUATOR Network.

The EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network is an international initiative aimed at promoting transparent and accurate reporting of health research studies in order to enhance the value and reliability of the medical research literature. The EQUATOR Network was established with the goals of raising awareness of the importance of good reporting of research; assisting in the development, dissemination and implementation of reporting guidelines for different types of study designs; monitoring the quality of reporting of research studies in the health sciences literature; and conducting research on issues that affect the quality of reporting of health research. The Network acts as an «umbrella» organisation, bringing together developers of reporting guidelines, editors of medical journals and peer reviewers, research funding bodies, and other key stakeholders with a mutual interest in improving the quality of research publications and of research itself.

Applications

Medicine

Clinical research in medicine is often of low quality, and many studies cannot be replicated. An estimated 85% of research funding is wasted. In addition, the presence of bias affects research quality. The pharmaceutical industry exerts substantial influence on the design and execution of medical research. Conflicts of interest are common among authors of medical literature and among editors of medical journals. While almost all medical journals require their authors to disclose conflicts of interest, editors are not required to do so. Financial conflicts of interest have been linked to higher rates of positive study results. In antidepressant trials, pharmaceutical sponsorship is the best predictor of a trial's outcome.

Blinding is another focus of meta-research, since error caused by poor blinding is a source of experimental bias. Blinding is poorly reported in the medical literature, and widespread misunderstanding of the subject has led to poor implementation of blinding in clinical trials. Moreover, failure of blinding is rarely measured or reported. Research showing the failure of blinding in antidepressant trials has led some scientists to argue that antidepressants are no better than placebo. In light of meta-research showing failures of blinding, the CONSORT standards recommend that all clinical trials assess and report the quality of blinding.

Studies have shown that systematic reviews of existing research evidence are used sub-optimally in planning new research or in summarizing results. Cumulative meta-analyses of studies evaluating the effectiveness of medical interventions have shown that many clinical trials could have been avoided if a systematic review of the existing evidence had been performed before a new trial was conducted. For example, Lau et al. analysed 33 clinical trials (involving 36,974 patients) evaluating the effectiveness of intravenous streptokinase for acute myocardial infarction. Their cumulative meta-analysis showed that 25 of the 33 trials could have been avoided had a systematic review been conducted before each new trial; in other words, the randomization of 34,542 patients was potentially unnecessary. One study analysed 1,523 clinical trials included in 227 meta-analyses and concluded that «less than one quarter of relevant prior studies» had been cited. It also confirmed earlier findings that most reports of clinical trials do not present a systematic review to justify the research or to summarize the results.
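
A cumulative meta-analysis of the kind Lau et al. performed simply re-computes the pooled estimate each time a trial is added, in chronological order, and shows when the evidence first became conclusive. A rough sketch under the fixed-effect, inverse-variance model (the trial inputs below are hypothetical, not the streptokinase data):

    # Cumulative fixed-effect meta-analysis: pool trials in date order and
    # watch when the 95% CI of the pooled effect first excludes zero.
    import math

    trials = [(-0.30, 0.25), (-0.25, 0.20), (-0.35, 0.15), (-0.28, 0.10)]  # (effect, SE)

    w_sum = wy_sum = 0.0
    for i, (y, se) in enumerate(trials, 1):
        w = 1.0 / se**2              # inverse-variance weight
        w_sum += w
        wy_sum += w * y
        pooled = wy_sum / w_sum
        half = 1.96 / math.sqrt(w_sum)
        print(f"after trial {i}: {pooled:+.2f} [{pooled - half:+.2f}, {pooled + half:+.2f}]")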

Many treatments used in modern medicine have been shown to be ineffective or even harmful. A 2007 study by John Ioannidis found that it took an average of ten years for the medical community to stop citing popular practices after their efficacy had been unequivocally disproven.

Psychology

Metascience has revealed significant problems in psychological research. The field suffers from high bias, low replicability, and widespread misuse of statistics. The replication crisis affects psychology more strongly than any other field; two-thirds of highly publicized findings have proved impossible to replicate. Meta-research finds that 80–95% of psychological studies confirm their initial hypotheses, which strongly implies the existence of publication bias.

The replication crisis has led to renewed efforts to re-test important findings. In response to concerns about publication bias and p-hacking, more than 140 psychology journals have adopted result-blind peer review, in which studies are preregistered and published without regard to their outcome. An analysis of these reforms estimated that 61 percent of result-blind studies yield null results, compared with an estimated 5 to 20 percent in earlier research. This analysis suggests that result-blind peer review substantially reduces publication bias.

Psychologists routinely confuse statistical significance with practical importance, enthusiastically reporting great certainty about unimportant facts. Some psychologists have responded by making greater use of effect-size statistics rather than relying solely on p values.
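
An effect size is trivial to report alongside a p-value; a minimal sketch with simulated data (an illustration, assuming numpy and scipy) shows why the two must not be conflated:

    # Cohen's d: the difference of means in pooled-standard-deviation units.
    # With a large enough sample, a negligible effect is still "significant".
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    treat = rng.normal(0.05, 1.0, 20000)   # tiny true effect
    ctrl = rng.normal(0.00, 1.0, 20000)

    _, p = stats.ttest_ind(treat, ctrl)
    sp = np.sqrt((treat.var(ddof=1) + ctrl.var(ddof=1)) / 2)  # pooled SD
    d = (treat.mean() - ctrl.mean()) / sp
    print(f"p = {p:.3g}, d = {d:.2f}")     # p far below .05, yet d is ~0.05 (trivial)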

Physics

Richard Feynman noted that new estimates of physical constants tended to be closer to previously published values than would be expected by chance. This was believed to be the result of confirmation bias: results that agreed with the existing literature were more likely to be believed, and therefore published. Physicists now implement blinding to prevent this kind of bias.
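
One common blind-analysis tactic is to «salt» the data with a hidden offset that is removed only after the analysis pipeline has been frozen. A toy sketch of that workflow (an illustration, not any specific experiment's protocol):

    # Toy blind analysis: analysts tune their pipeline on offset data and
    # subtract the secret salt exactly once, at the very end.
    import numpy as np

    rng = np.random.default_rng(42)
    measurements = rng.normal(9.81, 0.05, 1000)   # e.g. repeated measurements

    salt = float(rng.uniform(-0.5, 0.5))          # kept secret from analysts
    blinded = measurements + salt

    blinded_estimate = blinded.mean()             # all tuning happens here
    final_estimate = blinded_estimate - salt      # the single unblinding step
    print(round(final_estimate, 3))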

Related fields

Journalology

Journalology, also known as publication science, is the scholarly study of all aspects of the academic publishing process. The field seeks to improve the quality of scholarly research by implementing evidence-based practices in academic publishing. The term «journalology» was coined by Stephen Lock, a former editor-in-chief of The BMJ. The first Peer Review Congress, held in 1989 in Chicago, Illinois, is considered a pivotal moment in the founding of journalology as a distinct field. Journalology has been influential in pushing for the preregistration of studies in science, particularly in clinical trials; the registration of clinical trials is now expected in most countries.

Scientometrics

Scientometrics concerns itself with measuring bibliographic data in scientific publications. Major research issues include the measurement of the impact of research papers and academic journals, the understanding of scientific citation, and the use of such measurements in policy and management contexts.
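
Such bibliometric indicators are typically simple functions of citation counts. The h-index is a standard example (a sketch: the index is the largest h such that at least h of an author's papers have at least h citations each):

    # h-index from a list of per-paper citation counts.
    def h_index(citations):
        h = 0
        for i, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))   # -> 4
    print(h_index([25, 8, 5, 3, 3]))   # -> 3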

Scientific data science

Scientific data science is the use of data science to analyse research papers. It encompasses both qualitative and quantitative methods. Research in scientific data science includes the detection of fraud and the analysis of citation networks.
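
Citation network analysis usually treats papers as nodes in a directed graph. A minimal sketch using the networkx library (an assumed dependency; the paper IDs are hypothetical):

    # Rank papers in a tiny citation graph by PageRank-style influence.
    import networkx as nx

    G = nx.DiGraph()
    # An edge A -> B means "paper A cites paper B".
    G.add_edges_from([
        ("p3", "p1"), ("p4", "p1"), ("p4", "p2"), ("p5", "p3"), ("p5", "p1"),
    ])

    for paper, score in sorted(nx.pagerank(G).items(), key=lambda kv: -kv[1]):
        print(paper, round(score, 3))   # p1 ranks highest: it is cited most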

See also

  • Epistemology
  • Evidence-based practice
  • Evidence-based medicine
  • Evidence-based policy
  • Further research is needed
  • List of metascience research centres
  • Logology (science)
  • Metatheory
  • Open science
  • Science of science policy
  • Sociology of scientific knowledge
  • Self-allocation of research funding

Meta-analysis, in statistics, is the combining of the results of several studies in order to test a set of interrelated scientific hypotheses.

Depending on the case, the re-analysis uses either the primary data of the original studies or the published (secondary) results of studies addressing the same problem.[1] A meta-analysis is a frequent, but not obligatory, component of a systematic review.
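
In its simplest, fixed-effect form, the pooled estimate is an inverse-variance weighted mean of the per-study estimates. A minimal sketch (the study inputs are hypothetical):

    # Fixed-effect (inverse-variance) meta-analysis of k study estimates.
    import math

    studies = [(0.42, 0.20), (0.31, 0.15), (0.55, 0.25), (0.38, 0.12)]  # (effect, SE)

    weights = [1 / se**2 for _, se in studies]
    pooled = sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = 1 / math.sqrt(sum(weights))
    print(f"pooled effect = {pooled:.2f} +/- {1.96 * se_pooled:.2f} (95% CI)")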

The term «meta-analysis» was proposed by the American statistician Gene Glass.[2]

History

The first meta-analysis was carried out by Karl Pearson in 1904 in an attempt to overcome the problem of low statistical power in studies with small sample sizes: by analysing the results of several studies together, Pearson obtained more precise estimates.[3][4] The first comprehensive meta-analysis appeared in 1940 in the book Extra-sensory perception after sixty years, which combined the results of identical experiments performed by independent researchers; its authors were the Duke University psychologists Joseph Pratt and Joseph Rhine, with co-authors.[5] That meta-analysis covered 145 papers on extrasensory perception published between 1882 and 1939 and included an estimate of the influence of unpublished data. Although meta-analysis is now widely used in epidemiology and evidence-based medicine, no such studies appeared in medicine until 1955. In the 1970s, more sophisticated analytical methods were introduced into educational research by G. V. Glass, F. L. Schmidt and J. E. Hunter.
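
Pearson's pooling idea survives in classical techniques for combining evidence across studies. Fisher's method, a closely related early approach, combines independent p-values in two lines (a sketch, assuming scipy):

    # Fisher's method: combine k independent p-values into one chi-squared test.
    import math
    from scipy import stats

    pvals = [0.08, 0.12, 0.20, 0.06]     # individually unconvincing results
    X2 = -2 * sum(math.log(p) for p in pvals)
    combined = stats.chi2.sf(X2, df=2 * len(pvals))
    print(round(combined, 4))            # ~0.02: jointly much stronger evidence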

Gene Glass was the first modern statistician to formalize the use of meta-analysis, and he is regarded as the modern founder of the method. According to the Oxford English Dictionary, the first use of the term was by Glass in 1976.[2][6] The statistical theory of meta-analysis was greatly advanced by the work of Nambury S. Raju, Larry V. Hedges, Harris Cooper, Ingram Olkin, John E. Hunter, Jacob Cohen, Thomas C. Chalmers, Robert Rosenthal and Frank L. Schmidt.

Advantages

Advantages of meta-analysis (over literature reviews and similar approaches):

  • it can show whether the studied population is more diverse than the variation within individual samples would suggest;
  • it generalizes over several studies;
  • it controls for between-study variation;
  • it can explain heterogeneity between data sets;
  • it increases statistical power;
  • it copes with information overload: a large number of papers is published every year;
  • it summarizes several studies and therefore depends less on individual findings than a single study does;
  • it can reveal systematic errors (bias).

Stages

  1. Formulation of the problem
  2. Review of the published literature
  3. Selection of studies (inclusion criteria)
    • Inclusion based on quality criteria, e.g. the use of randomization and blinding in clinical trials
    • Selection of specific studies by subject, e.g. the treatment of breast cancer
    • Inclusion or exclusion of unpublished data
  4. Deciding which dependent variables or summary measures are included in the meta-analysis
    • Differences (discrete data)
    • Means (continuous data)
  5. Choice of model: fixed- or random-effects (see the sketch below)
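
Step 5 usually comes down to a fixed-effects versus a random-effects model; the DerSimonian–Laird estimator is the classic random-effects choice. A compact sketch (hypothetical study inputs):

    # DerSimonian-Laird random-effects meta-analysis (method-of-moments tau^2).
    import math

    studies = [(0.10, 0.15), (0.45, 0.20), (0.30, 0.10), (0.60, 0.25)]  # (effect, SE)

    w = [1 / se**2 for _, se in studies]
    y = [e for e, _ in studies]
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

    Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))   # heterogeneity
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - (len(studies) - 1)) / c)             # between-study variance

    w_re = [1 / (se**2 + tau2) for _, se in studies]          # re-weight the studies
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_re = 1 / math.sqrt(sum(w_re))
    print(f"tau^2 = {tau2:.3f}, pooled = {pooled:.2f} +/- {1.96 * se_re:.2f}")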

Notes

  1. Подходы к выполнению мета-анализа [Approaches to performing meta-analysis]. http://www.statsoft.ru/statportal/tabID__50/MId__449/ModeID__0/PageID__353/DesktopDefault.aspx
  2. Glass, G. V (1976). «Primary, secondary, and meta-analysis of research». Educational Researcher, 5, 3–8.
  3. O'Rourke, Keith (1 December 2007). «An historical perspective on meta-analysis: dealing quantitatively with varying study results». J R Soc Med 100 (12): 579–582. doi:10.1258/jrsm.100.12.579. PMID 18065712. Retrieved 10 September 2009.
  4. Egger, M; G D Smith (22 November 1997). «Meta-Analysis. Potentials and promise». BMJ (Clinical Research Ed.) 315 (7119): 1371–1374. ISSN 0959-8138. PMID 9432250. Retrieved 10 September 2009.
  5. Bösch, H. (2004). Reanalyzing a meta-analysis on extra-sensory perception dating from 1940, the first comprehensive meta-analysis in the history of science. In S. Schmidt (Ed.), Proceedings of the 47th Annual Convention of the Parapsychological Association, University of Vienna, (pp. 1–13).
  6. meta-analysis. Oxford English Dictionary. Oxford University Press. Draft Entry June 2008. Accessed 28 March 2009. «1976 G. V. Glass in Educ. Res. Nov. 3/2 My major interest currently is in what we have come to call..the meta-analysis of research. The term is a bit grand, but it is precise and apt… Meta-analysis refers to the analysis of analyses.»

Literature

  • Cornell, J. E. & Mulrow, C. D. (1999). Meta-analysis. In: H. J. Adèr & G. J. Mellenbergh (Eds). Research Methodology in the social, behavioral and life sciences (pp. 285–323). London: Sage.
  • Normand, S.-L. T. (1999). Tutorial in Biostatistics. Meta-Analysis: Formulating, Evaluating, Combining, and Reporting. Statistics in Medicine, 18, 321–359.
  • Sutton, A. J., Jones, D. R., Abrams, K. R., Sheldon, T. A., & Song, F. (2000). Methods for Meta-analysis in Medical Research. London: John Wiley. ISBN 0-471-49066-0.
  • Wilson, D. B., & Lipsey, M. W. (2001). Practical meta-analysis. Thousand Oaks: Sage Publications. ISBN 0761921680.
  • Owen, A. B. (2009). «Karl Pearson's meta-analysis revisited». Annals of Statistics, 37 (6B), 3867–3892. Supplementary report.
  • Ellis, Paul D. (2010). The Essential Guide to Effect Sizes: An Introduction to Statistical Power, Meta-Analysis and the Interpretation of Research Results. United Kingdom: Cambridge University Press. ISBN 0521142466.
  • Bonett, D. G. (2009). Meta-analytic interval estimation for standardized and unstandardized mean differences. Psychological Methods, 14, 225–238.

External links

  • Cochrane Handbook for systematic reviews

Areas of meta-research

Metascience can be categorized into five major areas of interest: methods, reporting, reproducibility, evaluation, and incentives. These correspond, respectively, to how research is performed, communicated, verified, evaluated, and rewarded.

Methods

Metascience seeks to identify poor research practices, including biases in research, poor study design, and misuse of statistics, and to find methods to reduce these practices. Meta-research has identified numerous biases in the scientific literature. Of particular note is the widespread misuse of p-values and the abuse of statistical significance.
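
One way to see why misuse of p-values matters is to simulate optional stopping, a common form of p-hacking in which data collection continues until a test happens to cross p < 0.05. The sketch below is a minimal Python illustration; the sample sizes, checkpoint schedule, and trial count are arbitrary assumptions, not values from any study discussed here.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def significant_with_optional_stopping(max_n=100, checkpoints=range(10, 101, 10)):
        """Sample from a true null effect, testing at several interim looks.

        Returns True if any interim t-test reaches p < 0.05 (a false positive).
        """
        data = rng.normal(0.0, 1.0, max_n)  # the null hypothesis is true by construction
        for n in checkpoints:
            _, p = stats.ttest_1samp(data[:n], 0.0)
            if p < 0.05:
                return True
        return False

    trials = 2000
    hits = sum(significant_with_optional_stopping() for _ in range(trials))
    # A single honest test would be wrong ~5% of the time; peeking at the
    # data ten times roughly doubles or triples that error rate.
    print(f"False-positive rate with optional stopping: {hits / trials:.1%}")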

Reporting

Meta-research has identified poor practices in the reporting, explanation, dissemination, and popularization of research, particularly within the social and health sciences. Poor reporting makes it difficult to accurately interpret the results of scientific studies, to replicate them, and to identify biases and conflicts of interest among authors. Solutions include the implementation of reporting standards and greater transparency in scientific studies, including stricter disclosure requirements for conflicts of interest. There are attempts to standardize the reporting of data and methodology through the creation of guidelines by bodies such as CONSORT and the larger EQUATOR Network.

Reproducibility

The replication crisis is an ongoing methodological crisis in which it has been found that many scientific studies are difficult or impossible to replicate. Although the roots of the crisis lie in meta-research of the mid- to late 1900s, the phrase «replication crisis» was not coined until the early 2010s as part of a growing awareness of the problem. The replication crisis particularly affects psychology (especially social psychology) and medicine. Replication is an essential part of the scientific process, and widespread failure to replicate casts doubt on the reliability of the affected fields.

Moreover, a replication of a study (or a failure to replicate) is considered less influential than the original study and is less likely to be published in many fields. This discourages both the publication of replication studies and even attempts to conduct them.

Evaluation

Metascience seeks to create a scientific foundation for peer review. Meta-research evaluates peer review systems, including pre-publication peer review, post-publication peer review, and open peer review. It also seeks to develop better criteria for research funding.

Incentives

Metascience seeks to promote better research through better incentive systems. This includes studying the accuracy, effectiveness, costs, and benefits of different approaches to ranking and evaluating research and those who perform it. Critics argue that perverse incentives have created a «publish or perish» environment in academia that promotes the production of junk science, low-quality research, and false positives. According to Brian Nosek, «The problem that we face is that the incentive system is focused almost entirely on getting research published, rather than on getting research right.» Advocates of reform seek to structure the incentive system to favor higher-quality results.

Reforms

Meta-research identifying flaws in scientific practice has inspired reforms in science. These reforms seek to address and correct problems in scientific practice that lead to low-quality or inefficient research.

Pre-registration

The practice of registering a scientific study before it is conducted is called pre-registration. It arose as a means of addressing the replication crisis. Pre-registration requires the submission of a registered report, which is then accepted for publication or rejected by a journal on the basis of its theoretical justification, experimental design, and proposed statistical analysis. Pre-registration serves to prevent publication bias, reduce data dredging, and increase replicability.

Reporting standards

Research showing poor consistency and quality of reporting has demonstrated the need for reporting standards and guidelines in science, leading to the rise of organizations that produce such standards, such as CONSORT (Consolidated Standards of Reporting Trials) and the EQUATOR Network.

The EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network is an international initiative aimed at promoting transparent and accurate reporting of health research studies in order to enhance the value and reliability of the medical research literature. The EQUATOR Network was established with the goals of raising awareness of the importance of good reporting of research; assisting in the development, dissemination, and implementation of reporting guidelines for different types of study designs; monitoring the quality of reporting of research studies in the health sciences literature; and conducting research on issues that affect the quality of reporting of health research. The Network acts as an «umbrella» organization, bringing together developers of reporting guidelines, medical journal editors and peer reviewers, research funding bodies, and other key stakeholders with a mutual interest in improving the quality of research publications and of research itself.

Applications

Medicine

Clinical research in medicine is often of low quality, and many studies cannot be replicated. An estimated 85% of research funding is wasted. In addition, the presence of bias affects research quality. The pharmaceutical industry exerts substantial influence on the design and execution of medical research. Conflicts of interest are common among authors of the medical literature and among editors of medical journals. While almost all medical journals require their authors to disclose conflicts of interest, editors are not required to do so. Financial conflicts of interest have been linked to higher rates of positive study outcomes. In antidepressant trials, pharmaceutical sponsorship is the best predictor of trial outcome.

Blinding is another focus of meta-research, since error caused by poor blinding is a source of experimental bias. Blinding is poorly reported in the medical literature, and widespread misunderstanding of the subject has resulted in poor implementation of blinding in clinical trials. Furthermore, failure of blinding is rarely measured or reported. Research showing the failure of blinding in antidepressant trials has led some scientists to argue that antidepressants are no better than placebo. In light of meta-research showing failures of blinding, the CONSORT standards recommend that all clinical trials assess and report the quality of blinding.
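
To make «measuring the failure of blinding» concrete, one published summary statistic is the Bang blinding index, often written per trial arm as (correct guesses − incorrect guesses) / total respondents, with «don't know» answers counted only in the denominator. The sketch below is a simplified reading of that formula with invented counts; consult the original definition before applying it to real trial data.

    def bang_blinding_index(correct, incorrect, dont_know):
        """Simplified Bang blinding index for one trial arm.

        Roughly: -1 means everyone guessed wrong, 0 is consistent with
        random guessing (intact blinding), 1 means complete unblinding.
        """
        total = correct + incorrect + dont_know
        return (correct - incorrect) / total

    # Hypothetical counts of participants guessing their own assignment:
    print(bang_blinding_index(correct=60, incorrect=20, dont_know=20))  # 0.4 -> blinding likely failed
    print(bang_blinding_index(correct=35, incorrect=35, dont_know=30))  # 0.0 -> consistent with intact blinding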

Studies have shown that systematic reviews of existing research evidence are used sub-optimally in planning new research and in summarizing results. Cumulative meta-analyses of studies evaluating the effectiveness of medical interventions have shown that many clinical trials could have been avoided if a systematic review of the existing evidence had been conducted before a new trial was run. For example, Lau et al. analyzed 33 clinical trials (involving 36,974 patients) evaluating the effectiveness of intravenous streptokinase for acute myocardial infarction. Their cumulative meta-analysis demonstrated that 25 of the 33 trials could have been avoided if a systematic review had been conducted first; in other words, randomizing 34,542 patients was potentially unnecessary. One study analyzed 1,523 clinical trials included in 227 meta-analyses and concluded that «less than one quarter of relevant prior studies» were cited. It also confirmed earlier findings that most clinical trial reports do not use systematic reviews to justify the research or to summarize its results.
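
The mechanics behind such cumulative meta-analyses are straightforward: order the trials chronologically and, after each new trial, recompute a pooled inverse-variance estimate; once the pooled confidence interval excludes «no effect», further trials add little information. The Python sketch below uses invented log odds ratios, not the streptokinase data above, and omits the heterogeneity checks a real analysis would require.

    import math

    # (year, log odds ratio, standard error) -- invented illustrative data
    trials = [
        (1972, -0.40, 0.35),
        (1975, -0.10, 0.30),
        (1979, -0.55, 0.25),
        (1982, -0.30, 0.20),
        (1986, -0.35, 0.15),
    ]

    weight_sum = effect_sum = 0.0
    for year, log_or, se in sorted(trials):
        w = 1.0 / se ** 2                  # fixed-effect inverse-variance weight
        weight_sum += w
        effect_sum += w * log_or
        pooled = effect_sum / weight_sum
        pooled_se = math.sqrt(1.0 / weight_sum)
        lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
        settled = hi < 0 or lo > 0         # 95% CI excludes log OR = 0
        print(f"{year}: pooled log OR {pooled:+.2f} [{lo:+.2f}, {hi:+.2f}]"
              + ("  <- already conclusive" if settled else ""))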

Many treatments used in modern medicine have been shown to be ineffective or even harmful. A 2007 study by John Ioannidis found that it took an average of ten years for the medical community to stop citing popular practices after their effectiveness had been unequivocally refuted.

Psychology

Metascience has revealed significant problems in psychological research. The field suffers from high bias, low replicability, and widespread misuse of statistics. The replication crisis affects psychology more strongly than any other field; two-thirds of highly publicized findings could not be replicated. Meta-research finds that 80-95% of psychological studies support their initial hypotheses, which strongly implies the existence of publication bias.
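
A back-of-the-envelope calculation shows why such a high rate of positive results is implausible. The expected share of significant findings is (share of true hypotheses × statistical power) + (share of false hypotheses × α); the prior and power values below are illustrative assumptions only, not measured quantities.

    def expected_positive_rate(p_true, power, alpha=0.05):
        """Share of studies expected to report a statistically significant result."""
        return p_true * power + (1 - p_true) * alpha

    # Even generously assuming 80% of tested hypotheses are true and 50%
    # average power, well under half of studies should 'work':
    print(expected_positive_rate(p_true=0.8, power=0.5))  # 0.41
    print(expected_positive_rate(p_true=1.0, power=0.5))  # 0.50, the ceiling at this power
    # A 90%+ positive rate would require implausibly high power across the
    # whole literature, hence the inference of publication bias.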

The replication crisis has led to renewed efforts to re-test important findings. In response to concerns about publication bias and p-hacking, more than 140 psychology journals have adopted result-blind peer review, in which studies are pre-registered and accepted for publication without regard to their outcomes. An analysis of these reforms estimated that 61 percent of result-blind studies yielded null results, compared with 5 to 20 percent in earlier research, suggesting that result-blind peer review substantially reduces publication bias.

Psychologists routinely confuse statistical significance with practical importance, enthusiastically reporting great certainty about unimportant facts. Some psychologists have responded by making greater use of effect size statistics rather than relying solely on p-values.
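
The distinction matters because, with a large enough sample, even a trivial effect yields a tiny p-value. The sketch below contrasts a p-value with Cohen's d for a simulated difference of 0.05 standard deviations; the sample size and effect are arbitrary illustrative choices.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    a = rng.normal(0.00, 1.0, 50_000)  # control group
    b = rng.normal(0.05, 1.0, 50_000)  # treatment group: a trivial 0.05 SD shift

    _, p = stats.ttest_ind(a, b)
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    cohens_d = (b.mean() - a.mean()) / pooled_sd

    # Typical output: p is minuscule while d stays ~0.05, i.e. the result is
    # statistically 'significant' yet practically unimportant.
    print(f"p = {p:.2e}, Cohen's d = {cohens_d:.3f}")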

Physics

Richard Feynman noted that estimates of physical constants were closer to published values than would be expected by chance. This was believed to be the result of confirmation bias: results that agreed with the existing literature were more likely to be believed and therefore published. Physicists now implement blinding to prevent this kind of bias.
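
A common blinding scheme in experimental physics hides a random offset inside the analysis pipeline, so that selection criteria and fits are frozen before anyone sees the true value. The miniature Python sketch below shows the idea only; the offset scheme and numbers do not describe any specific experiment's protocol.

    import numpy as np

    rng = np.random.default_rng(42)

    # A secret offset, generated once and kept hidden from the analysts.
    secret_offset = rng.uniform(-1.0, 1.0)

    def blinded(measurements):
        """Analysts tune cuts and fits on offset-shifted data only."""
        return np.asarray(measurements) + secret_offset

    raw = rng.normal(3.14, 0.05, 1000)   # stand-in for raw measurements
    blinded_mean = blinded(raw).mean()   # all analysis decisions use this value

    # Only after the analysis is frozen is the offset removed ('unblinding').
    final_result = blinded_mean - secret_offset
    print(f"blinded: {blinded_mean:.4f}, unblinded: {final_result:.4f}")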

Related fields

Journalology

Journalology, also known as publication science, is the scholarly study of all aspects of the academic publishing process. The field seeks to improve the quality of scholarly research by implementing evidence-based practices in academic publishing. The term «journalology» was coined by Stephen Lock, a former editor-in-chief of the BMJ. The first Peer Review Congress, held in 1989 in Chicago, Illinois, is considered a pivotal moment in the founding of journalology as a distinct field. Journalology has been influential in pushing for study pre-registration in science, particularly in clinical trials. Clinical trial registration is now expected in most countries.

Scientometrics

Scientometrics concerns itself with measuring bibliographic data in scientific publications. Major research issues include the measurement of the impact of research papers and academic journals, the understanding of scientific citations, and the use of such measurements in policy and management contexts.
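
As a concrete example of the indicators scientometrics studies, an author's h-index is the largest h such that h of their papers have at least h citations each. A minimal sketch:

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        h = 0
        for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers each cited at least 4 times
    print(h_index([25, 8, 5, 3, 3]))  # 3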

Scientific data science

Scientific data science is the use of data science to analyze research papers. It encompasses both qualitative and quantitative methods. Research in scientific data science includes the detection of fraud and citation network analysis.
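
One common quantitative technique here is ranking papers by their position in the citation graph. The toy example below builds a tiny directed citation network and scores nodes with PageRank; the paper labels and edges are invented, and the networkx library is assumed to be available.

    import networkx as nx

    g = nx.DiGraph()
    # An edge A -> B means 'paper A cites paper B' (all labels invented).
    g.add_edges_from([
        ("C", "A"), ("D", "A"), ("E", "A"),  # A is cited most often
        ("D", "B"), ("E", "B"),
        ("E", "C"),
    ])

    # PageRank flows along citation edges, so heavily cited papers rank highest.
    for paper, score in sorted(nx.pagerank(g).items(), key=lambda kv: -kv[1]):
        print(paper, round(score, 3))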

See also

  • Epistemology
  • Evidence-based practices
  • Evidence-based medicine
  • Evidence-based policy
  • Further research is needed
  • HARKing
  • List of metascience research centers
  • Logology (science)
  • Metatheory
  • Open science
  • Science of science policy
  • Sociology of scientific knowledge
  • Self-organized funding allocation

Further reading

  • Denworth, Lydia, «A Significant Problem: Standard scientific methods are under fire. Will anything change?», Scientific American, vol. 321, no. 4 (October 2019), pp. 62–67. «The use of p values for nearly a century [since 1925] to determine statistical significance of experimental results has contributed to an illusion of certainty and [to] reproducibility crises in many scientific fields. There is growing determination to reform statistical analysis... Some [researchers] propose changing statistical methods, while others would do away with a threshold for defining 'significant' results.» (p. 63.)
  • Harris, Richard (2017). Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions. Basic Books. ISBN 9780465097913.

External links

Journals

  • Minerva: A Journal of Science, Learning and Policy
  • Research Integrity and Peer Review
  • Research Policy
  • Science and Public Policy

Conferences

  • Metascience 2019 Symposium at Stanford
