Apr 28, 2022

Public workspacePanicle Ratio Network: A high-throughput dynamic phenotype recognition model based on ultra-high-definition unmanned aerial vehicle images for rice panicle analysis in fields

  • Ziyue Guo1,
  • Chenghai Yang2,
  • Wangnen Yang1,
  • Guoxing Chen1,
  • Zhao Jiang1,
  • Botao Wang1,
  • Jian Zhang1
  • 1Huazhong Agricultural University;
  • 2USDA-Agricultural Research Service
Protocol CitationZiyue Guo, Chenghai Yang, Wangnen Yang, Guoxing Chen, Zhao Jiang, Botao Wang, Jian Zhang 2022. Panicle Ratio Network: A high-throughput dynamic phenotype recognition model based on ultra-high-definition unmanned aerial vehicle images for rice panicle analysis in fields. protocols.io https://dx.doi.org/10.17504/protocols.io.bp2l6176rvqe/v1
License: This is an open access protocol distributed under the terms of the Creative Commons Attribution License,  which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
Protocol status: Working
We use this protocol and it’s working.
Created: April 27, 2022
Last Modified: April 28, 2022
Protocol Integer ID: 61524
Disclaimer
DISCLAIMER – FOR INFORMATIONAL PURPOSES ONLY; USE AT YOUR OWN RISK

The protocol content here is for informational purposes only and does not constitute legal, medical, clinical, or safety advice, or otherwise; content added to protocols.io is not peer reviewed and may not have undergone a formal approval of any kind. Information presented in this protocol should not substitute for independent professional judgment, advice, diagnosis, or treatment. Any action you take or refrain from taking using or relying upon the information presented here is strictly at your own risk. You agree that neither the Company nor any of the authors, contributors, administrators, or anyone else associated with protocols.io, can be held responsible for your use of the information contained in or linked to this protocol or any of our Sites/Apps and Services.
Abstract
To increase the convenience and efficiency of rice cultivation and breeding, a panicle ratio (PR) evaluation model based on deep learning and UAV–RGB images was established in this study, focusing on ETP and HD, which affect rice yield and reflect rice growth. Based on an image regression method that reduces the effects of cross-overlapping caused by an excessive density of targets, the Panicle Ratio Network (PRNet) uses a combination of DenseNet and spatial pyramid pooling (SPP) to extract the features of growing panicles during the rice heading stage. The training results showed that the model performed well on a data set collected in 2019, and the estimation results obtained on the test data set demonstrated that the model had good robustness. Images containing a whole rice plot can be used as direct inputs to PRNet to obtain PR, and ETP and HD can then be determined with high accuracy from the PR curve. After testing images with different resolutions, the acceptable resolution range for the model was determined to be between 0.6 mm and 2.4 mm, within which the RMSE of the estimated results was less than 15%. Data collection schemes were accordingly presented to facilitate the use of PRNet. However, different types of cameras have different image quality settings, and the model's estimation accuracy might not be ideal even if the resolution is acceptable. The accuracy of the estimates obtained by the model was higher for rice plots in the center of images captured around noon than for plots imaged at other times or at the edges of images. Based on a high-throughput and efficient UAV ultra-high-definition imaging platform, PRNet is suitable for nondestructive PR monitoring of more than 2,000 plots within 10 min. Furthermore, this scheme can be extended to other panicle-related crop phenotypic analyses and provides a reference for extracting indicators determined by several traits, thereby accelerating the development of in situ field crop phenotypic information extraction.
Guidelines
Please follow the steps in order.
Materials
A camera that can take high-resolution images.
Safety warnings
There is no particularly dangerous operation in this experiment.
Before start
To use our trained model, make sure the image resolution is between 0.6 and 2.4 mm.
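To check whether a planned flight will satisfy this resolution requirement, the ground sampling distance (GSD) can be estimated from the camera sensor width, image width, focal length, and flight altitude. The sketch below is illustrative only; the camera parameters and altitude are hypothetical example values, not settings prescribed by this protocol:

```python
def ground_sampling_distance_mm(sensor_width_mm, image_width_px,
                                focal_length_mm, altitude_m):
    """Approximate ground distance covered by one pixel, in millimeters."""
    # GSD = (sensor width * altitude) / (focal length * image width)
    altitude_mm = altitude_m * 1000.0
    return (sensor_width_mm * altitude_mm) / (focal_length_mm * image_width_px)

# Hypothetical example: 13.2 mm sensor, 5472 px wide, 8.8 mm lens, 8 m altitude
gsd = ground_sampling_distance_mm(13.2, 5472, 8.8, 8.0)
print(f"GSD = {gsd:.2f} mm/px")  # ~2.19 mm/px
assert 0.6 <= gsd <= 2.4, "resolution outside the model's acceptable range"
```

If the computed GSD falls outside 0.6–2.4 mm, the flight altitude (or lens) can be adjusted before collecting data.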
High-resolution images of rice at the heading stage were obtained with the camera equipment.
6w 3d
After image collection, some plots were selected to investigate their panicle number and tiller number.
6w 3d
The plot to be observed was cropped from the original image, and the corresponding plot image was annotated according to the ground survey results. The data that support the findings of this study are openly available in [Figshare] at https://doi.org/10.6084/m9.figshare.17169266.v1.
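Once the pixel bounding box of a plot is known, cropping it from the original image amounts to simple array slicing. The sketch below uses NumPy; the image size and plot coordinates are placeholder values, not ones from the study:

```python
import numpy as np

def crop_plot(image, bbox):
    """Return the sub-image for one plot.

    image : H x W x 3 array (e.g. a loaded UAV image)
    bbox  : (row0, col0, row1, col1) pixel bounding box of the plot
    """
    r0, c0, r1, c1 = bbox
    return image[r0:r1, c0:c1]

# Toy example: a 100 x 100 "mosaic" with one 20 x 30 plot region
mosaic = np.zeros((100, 100, 3), dtype=np.uint8)
plot_img = crop_plot(mosaic, (10, 20, 30, 50))
print(plot_img.shape)  # (20, 30, 3)
```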
3w
A deep CNN (DCNN) for automatically estimating PR was built based on the Keras library with the TensorFlow backend.
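The abstract notes that PRNet combines DenseNet features with spatial pyramid pooling (SPP). As an illustration of the SPP idea only (not the authors' Keras implementation), the following NumPy sketch max-pools a feature map over several grid levels and concatenates the results into a fixed-length vector, which is what lets the network accept plot images of varying size:

```python
import numpy as np

def spatial_pyramid_pool(features, levels=(1, 2, 4)):
    """Max-pool an H x W x C feature map over 1x1, 2x2, and 4x4 grids
    and concatenate, giving a fixed-length vector regardless of H, W."""
    h, w, c = features.shape
    pooled = []
    for n in levels:
        # bin edges for an n x n grid over the spatial dimensions
        rows = np.linspace(0, h, n + 1).astype(int)
        cols = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = features[rows[i]:rows[i + 1], cols[j]:cols[j + 1]]
                pooled.append(cell.max(axis=(0, 1)))
    return np.concatenate(pooled)  # length = (1 + 4 + 16) * C

fmap = np.random.rand(13, 17, 8)   # arbitrary spatial size, 8 channels
vec = spatial_pyramid_pool(fmap)
print(vec.shape)  # (168,)
```

Because the output length depends only on the channel count and the pyramid levels, feature maps from differently sized inputs map to vectors of identical length.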
1w
The data set was divided into training, validation, and test sets. Model parameters were optimized using the training and validation sets.
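A simple way to perform such a split is a seeded shuffle followed by proportional slicing. The 70/15/15 ratios below are an illustrative assumption; this step does not state the proportions the authors used:

```python
import random

def split_dataset(samples, ratios=(0.7, 0.15, 0.15), seed=42):
    """Shuffle and split a list of samples into train/val/test subsets."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    rng = random.Random(seed)        # fixed seed for a reproducible split
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(list(range(100)))
print(len(train), len(val), len(test))  # 70 15 15
```

Fixing the seed keeps the split identical across runs, so validation results remain comparable while tuning model parameters.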
2h
Train the heading proportion recognition model. All the code can be viewed on GitHub at https://github.com/Ziyue-Guo/Panicle_Ratio_Network.git.
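Once the trained model has produced PR estimates for a time series of plot images, HD can be read off the PR curve. One simple approach, shown here purely for illustration (the 0.5 threshold and linear interpolation are assumptions, not values prescribed by this protocol), is to interpolate the day on which PR first crosses a chosen threshold:

```python
def heading_date_from_pr(days, pr_values, threshold=0.5):
    """Linearly interpolate the day on which the PR curve first crosses
    `threshold`. Both lists must be sorted by day."""
    for (d0, p0), (d1, p1) in zip(zip(days, pr_values),
                                  zip(days[1:], pr_values[1:])):
        if p0 < threshold <= p1:
            # linear interpolation between the two observations
            return d0 + (threshold - p0) * (d1 - d0) / (p1 - p0)
    return None  # PR never crossed the threshold

# Toy PR time series: (day of year, estimated panicle ratio)
days = [210, 213, 216, 219, 222]
pr = [0.05, 0.20, 0.45, 0.70, 0.90]
print(heading_date_from_pr(days, pr))  # ~216.6
```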
4h