At the end of December 2014, the review result for a submitted manuscript came back. It started with "We are pleased to inform you that...". Good news. It's going to be published!
The work took much effort, both in the write-up and in two rounds of revision. During the review process, the reviewers offered good, constructive criticism. I see such criticism as "friendly", and I am grateful to these anonymous reviewers. Now I am very happy that the manuscript is going to be published.
The category for this entry is "Science". It's been a while since I last wrote for this category.
******************
Some time ago I wrote about how to access and read recent scientific works. For works published in the biomedical sciences, this can be done through PubMed and other databases; a small search sketch follows below.
"Science: How can you know the latest science" (6/22/14)
However, that entry did not address the question of assessing the quality of the works. It did not explain how to tell whether a published paper is highly regarded among the experts or goes unacknowledged by other scientists.
And it is important to know which work is considered more "trustworthy". After all, you can find all kinds of "evidence" on the net, and no assessment of quality accompanies it. If a graduate student cites too many obscure works in a thesis and misses the classic, landmark works of the field, I doubt the student can get the degree.
Although the current system is not perfect, it is worth explaining how scientists evaluate other scientists' work.
"How do you measure your science?" has been a long standing question. It is not really easy to compare works from different scientific specialties, like immunology and molecular genetics. Evaluation for scientific work is not necessarily a popularity contest. Yet, we scientists currently resort to a form of popularity contest among experts.
First of all, there are many scientific specialties, and each specialty has a limited number of experts. Evaluation is done through a peer review system: experts in a field review works by other experts in the same field. The review is usually done anonymously.
For example, I have no idea who reviewed my manuscript; I can only assume it was someone in the same field or a close specialty. Probably I have read their papers before, or perhaps they were among the potential reviewers I suggested when I submitted the manuscript to the journal.
To maintain the integrity of this peer review system, we have to know who the experts in each field are. Conversely, when I am asked to review a work somewhat outside my core expertise (which can happen), I have to excuse myself from reviewing it.
The overall structure of science is a collection of highly divided specialties. It is not democracy, but government of the experts, by the experts, for the experts.
In search of a metric that could be applied across the many specialties, we started using the "Impact Factor (IF)". The IF measures how many times, on average, a journal's articles are cited by other researchers: if the articles in a journal are cited 5 times each on average, the journal's IF is 5.0. Over time, the IF became a standard measure of a journal's ranking, and it helped form a hierarchy among the journals.
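For the curious, the commonly used two-year IF divides the citations a journal receives in a given year (to articles from the previous two years) by the number of articles it published in those two years. A minimal sketch of that arithmetic, with made-up numbers:

# A minimal sketch of the standard two-year impact factor arithmetic.
# All numbers below are made up for illustration.

def impact_factor(citations_this_year, articles_prev_two_years):
    """Two-year IF: citations received this year to articles published in
    the previous two years, divided by the number of articles published
    in those two years."""
    return citations_this_year / articles_prev_two_years

# Hypothetical journal: 1,000 citations in 2014 to its 2012-2013 articles,
# of which there were 200.
print(impact_factor(1000, 200))  # -> 5.0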
There are some prestigious journals (for example, the general journals "Cell", "Nature", and "Science"). They boast IFs of 30-40 and sit at the top of the journal hierarchy. An IF of 8-10 makes a journal top tier in a specialty field ("Cancer Research" or "Oncogene" in the cancer research field, for example). So if you want a general idea of how a published work is regarded among experts, check the journal and its IF.
A journal's IF is an average over all articles in the journal. Among the published articles there will be some hits (e.g., 20 citations for an article in an IF 5 journal) and some duds (e.g., 1 citation for an article in the same journal). It is better to check the citation count of the particular paper you are interested in.
Note that citations cannot accumulate immediately. It takes six months to three years (or longer) for a work to become well known, acknowledged, and appropriately cited by others. Even if I read a new paper today and think it is good work, for that paper to be "cited" I need to publish my own work and cite it in the text, and that takes time.
That said, some works are more "citable" than others, and I am sure that some metrics-savvy researchers deliberately pursue the more citable kind.
This is how "experts" (tend to) estimate the impact of the works in their field. In recent years, there are some websites that compile IF and calculate a researcher's overall "IF". Of course, there are many inside factors and built-up reputation that play a role in evaluation. These factors are not always friendly to metrics.
Another important metric for a researcher is funding. Not all science projects are funded equally, even when their quality is comparable. Funding is allocated with the needs of the public or the funding body in mind. The general population appreciates "big", breakthrough-looking advancements: an Ebola medicine, stem cell-based regenerative therapy, easing PTSD, identifying the cause of autism, or curing difficult cancers. A portion of research funding certainly goes to these "hot" projects. But not all researchers jump into hot projects. It doesn't work that way.
One thing about scientific expertise is that it takes time to develop. Knowing a subject inside and out, including what worked and what did not (which may not be published and thus cannot be learned from the literature), requires substantial time, effort, and adherence to a specialty. The cultivated, broad-based infrastructure and human resources are priceless.
A business-oriented view of science can overlook the importance of variety. Business tends to focus on what is successful here and now. Broad-based science yields the variety that allows cross-pollination between specialties. Historically, many Nobel Prize-winning breakthroughs came from seemingly useless or not-the-hottest research. A research specialty does not appear from nowhere; it has roots, and it is usually maintained in a lineage. It is true that national projects built the nuclear bomb and sent people to the moon, but I would like the public to be aware of the importance of broad-based science.
The Ebola research budget was cut for a while because someone thought the research lacked urgency and was a waste of money. Then the Ebola scare of 2014 happened. Was that budget cut short-sighted or what?
Science has an aspect of art to it. Metrics are useful, but they capture only a limited view. Metrics for scientific works are different from counts of Facebook likes, website visits, or viral video views. The "who said it" factor still counts heavily in science.