Detailed Instructions: In this task, you are given a set of paper reviews in English and Spanish. Based on the given reviews, your job is to generate a decision, i.e., "accept" or "reject", for the given paper. Note that URLs in the text have been replaced with [Link].
Q: Review_1: The paper describes an experience with the automated inspection of spectra for the Hubble Legacy Archive Grism data pipeline.

Comments: In the conclusions the authors say that "We have identified two classes of flawed spectra which were not picked up by the automatic classification because of their very small number of training samples. Per construction machine learning techniques can not classify such outliers." This sentence should be explained further. As Hastie, Tibshirani, and Friedman say in The Elements of Statistical Learning (see Chapter 7): "it is too difficult to give a general rule on how much training data is enough; among other things, this depends on the signal-to-noise ratio of the underlying function, and the complexity of the models being fit to the data." So, merely stating that the number of training samples is small does not provide enough information to decide whether machine learning techniques are adequate for the problem.

Other minor comments: + If reference [2] has not yet been written or published, it should be deleted from the paper. The dataset may be of scientific importance. The paper is mainly descriptive.

Review_2: This manuscript presents an interesting solution based on machine learning techniques for classifying legacy spectral data from the Hubble Space Telescope, so that the results and the "good" spectra can be published on the Internet for the scientific community. The manuscript is well written and the results are robust. The experiment is sound, and the manuscript seems acceptable in its current form. The work may be short, but it is very interesting to the Infonor and JCC communities. The main problem with the paper is that it is difficult to compare the results of the classifiers used. A confusion matrix and the parameters of each classifier would clarify the results.

Review_3: Interesting application domain. Nothing new from a machine learning perspective.
The authors should provide more information about the models they have obtained with the different classifiers (the feature subset selection used, the selected features, accuracies, the statistical significance of the differences in accuracy, the models themselves, ...). Semi-supervised classification would be the appropriate method for the approach described in the last paragraph of the "Methodology" section.
A:
accept
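Review_2 argues that a confusion matrix would make the classifiers comparable. As a minimal, self-contained sketch of what that tabulation involves (the "good"/"flawed" spectra labels and the predictions below are hypothetical illustrations, not data from the paper):

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Tabulate (true label, predicted label) pairs into a nested dict:
    cm[true][pred] = number of samples with that true/predicted combination."""
    counts = Counter(zip(y_true, y_pred))
    return {t: {p: counts[(t, p)] for p in labels} for t in labels}

# Hypothetical spectra classifications for illustration only.
y_true = ["good", "good", "flawed", "good", "flawed", "good"]
y_pred = ["good", "flawed", "flawed", "good", "good", "good"]

cm = confusion_matrix(y_true, y_pred, ["good", "flawed"])
# The diagonal (cm["good"]["good"], cm["flawed"]["flawed"]) counts correct
# classifications; off-diagonal entries expose which class is being confused,
# which is exactly the per-class detail the reviewer finds missing.
```

Printing one such matrix per classifier, together with each classifier's parameters, would address the comparability concern the reviewer raises.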