This PDF file contains the front matter associated with SPIE Proceedings Volume 12348 including the Title Page, Copyright information, Table of Contents, and Conference Committee Page.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Artificial Intelligence Modeling and Neural Network Applications
At present, the volume of short-video production is enormous. This enriches the total pool of short-video resources, but it also creates considerable difficulties for subsequent content management, review, and distribution. This paper surveys the technical difficulties of short-video content analysis, introduces virtualization technology and the related intelligent-analysis algorithms, including CNN-based methods deployed on virtualized infrastructure, which are widely and deeply applied in short-video content analysis, and points out directions for later algorithm optimization, so as to provide technical support and momentum for the industry's development.
This paper establishes a multi-objective integer programming model that considers both economic and safety aspects. A time-series model predicts the loss of goods by each forwarder to support the forwarding decision, and dynamic programming is used to solve for the forwarding plan with the lowest loss of goods. First, the data are preprocessed with an ADF test and a two-pass isolation-forest algorithm; raw-material preference weights are set according to a Friedman test and transaction means; a decision-tree model classifies the suppliers; and finally the 50 most important suppliers are selected by ranking them with a structural-entropy-weight TOPSIS comprehensive evaluation. Based on the effect of diminishing marginal returns, an interaction function between strategic and large suppliers is established, the minimum number of suppliers is solved to be 35, and the corresponding numbers of A, B, and C suppliers are selected according to ranking and quota allocation. The minimum number of suppliers that meets the enterprise's production demand is then determined by an ordering-scheme decision model based on multi-objective integer programming.
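The entropy-weight TOPSIS ranking described above can be sketched in a few lines. The matrix values below are illustrative, not the paper's data, and all criteria are assumed to be benefit-type:

```python
import math

def entropy_weights(X):
    """Entropy weights for a decision matrix X (rows: suppliers, cols: criteria).
    Assumes all criteria are benefit-type with positive values."""
    n, m = len(X), len(X[0])
    k = 1.0 / math.log(n)
    weights = []
    for j in range(m):
        col = [X[i][j] for i in range(n)]
        s = sum(col)
        p = [v / s for v in col]
        e = -k * sum(v * math.log(v) for v in p if v > 0)
        weights.append(1.0 - e)          # degree of divergence
    total = sum(weights)
    return [w / total for w in weights]

def topsis(X, w):
    """Closeness coefficients; higher means closer to the ideal solution."""
    n, m = len(X), len(X[0])
    norms = [math.sqrt(sum(X[i][j] ** 2 for i in range(n))) for j in range(m)]
    V = [[w[j] * X[i][j] / norms[j] for j in range(m)] for i in range(n)]
    best = [max(V[i][j] for i in range(n)) for j in range(m)]
    worst = [min(V[i][j] for i in range(n)) for j in range(m)]
    scores = []
    for i in range(n):
        d_pos = math.sqrt(sum((V[i][j] - best[j]) ** 2 for j in range(m)))
        d_neg = math.sqrt(sum((V[i][j] - worst[j]) ** 2 for j in range(m)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

Suppliers would then be ranked by score and the top 50 retained.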
The Conditional Generative Adversarial Network (CGAN) is an architectural variant of the Generative Adversarial Network (GAN) that feeds conditional data into the generator and discriminator simultaneously. CGANs have been applied in a wide range of fields in recent years. One of the most famous uses is in computer vision, where they enable image and video transformations under specified conditions. In this paper, we first introduce the structure of the GAN, which underlies every CGAN system, and the basic CGAN as demonstrated in computer vision. Then, we review different implementations of the CGAN in two major computer-vision domains: image processing and video processing. In each domain, we review three representative research directions, all of which implement the CGAN or its variants, and we compare several outstanding approaches in each direction.
Stock-price movement prediction faces the problem that the distribution of certain underlying variables changes over time, a phenomenon known as concept drift. Because of it, stock-price prediction models tend to give less accurate results, since the data distribution the model was trained on is no longer in line with the current distribution. In this paper, an Adversarial Attentive Long Short-Term Memory (Adv-ALSTM) model is used together with a Hoeffding's-inequality-based Drift Detection Method with moving Average-test (HDDMA) concept-drift detector to make price-movement predictions on 50 different stocks. Every time the HDDMA detector detects a concept drift, the model undergoes one of four possible retraining methods. The conducted experiments highlight the effectiveness of each proposed retraining method, as well as how each mitigates the negative effects of concept drift in a different way. The best observed results were a 2.5% increase in accuracy and a 135.38% increase in Matthews Correlation Coefficient (MCC) compared to the vanilla Adv-ALSTM model. These results validate the effectiveness of the proposed retraining methods when applied to a model trained on a financial dataset.
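A minimal sketch of a Hoeffding-bound drift test in the spirit of HDDMA follows; the abstract does not give the detector's exact statistics, so the window size, confidence, and reset rule below are assumptions:

```python
import math
from collections import deque

class HoeffdingDriftDetector:
    """Simplified drift detector in the spirit of HDDM-A (parameters are
    hypothetical). Compares a long-run reference mean with the mean of a
    recent sliding window; signals drift when the gap exceeds the Hoeffding
    bound for the given confidence. Inputs are assumed to lie in [0, 1],
    e.g. per-step prediction errors."""

    def __init__(self, window=30, confidence=0.005):
        self.window = window
        self.confidence = confidence
        self.recent = deque(maxlen=window)
        self.total = 0.0
        self.count = 0

    def _bound(self, n1, n2):
        # Hoeffding bound on the difference of two empirical means.
        return math.sqrt(0.5 * (1.0 / n1 + 1.0 / n2) *
                         math.log(1.0 / self.confidence))

    def update(self, x):
        """Add one observation; return True if drift is detected."""
        self.total += x
        self.count += 1
        self.recent.append(x)
        if self.count <= self.window:
            return False
        ref_mean = self.total / self.count
        win_mean = sum(self.recent) / len(self.recent)
        if abs(win_mean - ref_mean) > self._bound(self.count, len(self.recent)):
            # Drift: reset the reference statistics (retraining would go here).
            self.total, self.count = sum(self.recent), len(self.recent)
            return True
        return False
```

In the paper's pipeline, a True return would trigger one of the four retraining methods.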
Automatic recognition of rice leaf diseases has always been a research hotspot in the smart-agriculture community, as it can provide key information for pest control and scientific production decisions. Benefiting from the rapid development of convolutional neural networks, accuracy has improved markedly, yet it remains limited when the lesion area is very small in the early stage. To this end, we propose an attention-based feature-enhancement strategy for rice leaf disease recognition. Specifically, we first capture discriminative features by learning several independent spatial attention maps that are highly responsive to categories. We then fuse these local discriminative features according to each feature's contribution score to the category. With extensive experiments on the benchmark dataset RiceLeafs, we achieve a training accuracy of 99.68% and a testing accuracy of 87.60%, which verifies the effectiveness of our method.
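The fusion step, combining local discriminative features by their contribution to the category, can be sketched as a softmax-weighted sum; the contribution scores themselves are assumed given here, since the abstract does not specify how they are computed:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_local_features(features, scores):
    """Fuse K local feature vectors into one descriptor, weighting each by
    the softmax of its contribution score to the predicted category.
    `features`: list of K equal-length vectors; `scores`: K floats."""
    weights = softmax(scores)
    dim = len(features[0])
    fused = [0.0] * dim
    for w, f in zip(weights, features):
        for d in range(dim):
            fused[d] += w * f[d]
    return fused
```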
The heating process of a blast furnace is a complex and large controlled object, characterized by nonlinearity, many variables, distributed parameters, and intertwined fast and slow dynamics; satisfactory control cannot be achieved by traditional control alone. With the development of artificial intelligence in recent years, neural networks, expert systems, and fuzzy and predictive control have provided new ideas for blast-furnace heating control. In this paper, an LSTM algorithm is proposed within a deep-learning framework. Model simulations on massive production data show that the LSTM network has stronger generalization ability, smaller error, and higher accuracy. The LSTM algorithm can effectively handle the characteristics of time-series data, provide guidance for production practice, and lay a good foundation for the predictive control of blast furnaces.
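For readers unfamiliar with the gating that makes the LSTM suited to time-series data, a single-unit forward step looks roughly like this (scalar weights chosen for illustration; a production model would use learned matrix-valued gates):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_cell_step(x, h_prev, c_prev, W):
    """One forward step of a single-unit LSTM cell (scalar input and state).
    W maps each gate name ('f' forget, 'i' input, 'o' output, 'g' candidate)
    to a (w_x, w_h, bias) triple."""
    f = sigmoid(W['f'][0] * x + W['f'][1] * h_prev + W['f'][2])
    i = sigmoid(W['i'][0] * x + W['i'][1] * h_prev + W['i'][2])
    o = sigmoid(W['o'][0] * x + W['o'][1] * h_prev + W['o'][2])
    g = math.tanh(W['g'][0] * x + W['g'][1] * h_prev + W['g'][2])
    c = f * c_prev + i * g      # cell state: gated memory update
    h = o * math.tanh(c)        # hidden state: gated output
    return h, c
```

The forget gate f decides how much past furnace state to keep, which is exactly the long-horizon memory a slow thermal process needs.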
In recent years, prolonged fatigued driving has been recognized as a very dangerous behavior. In scenarios where people must stay awake for long periods, fatigue accumulates over time; for drivers in particular, this impairment is especially dangerous and can even threaten their lives, so an in-vehicle fatigue-detection system plays a crucial role. To effectively remind drivers to avoid fatigued driving and reduce the number of car accidents, the proposed system, developed in the PyCharm environment, uses the MTCNN face-detection algorithm and the ERT cascade-regression algorithm to locate 68 facial landmarks from the camera feed and collect facial features [1]. It focuses on the eye, nose, and mouth regions, sets thresholds for judging the degree of fatigue, and implements the program logic with the OpenCV and Dlib image-processing libraries, counting blinks, yawns, and nods against those thresholds. This paper focuses on the design and implementation of a fatigue-detection system based on facial-feature recognition. The system uses Python and OpenCV for the simulation program and tests, loads the 68-point facial-landmark model with Dlib's face detector, analyzes behaviors such as yawning, eye closing, and nodding to judge whether the driver is tired, and uses Flask for data visualization and analysis [2].
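With the 68 landmarks located, eye closure is commonly judged from the eye aspect ratio (EAR). The abstract does not name its exact metric, so this is one standard choice, with a hypothetical threshold:

```python
import math

def euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_aspect_ratio(eye):
    """Eye aspect ratio from the six landmarks of one eye, ordered as in
    the Dlib 68-point model (p1..p6 around the eye contour). The ratio
    drops toward zero when the eye closes."""
    a = euclidean(eye[1], eye[5])   # vertical distance 1
    b = euclidean(eye[2], eye[4])   # vertical distance 2
    c = euclidean(eye[0], eye[3])   # horizontal distance
    return (a + b) / (2.0 * c)

def is_closed(eye, threshold=0.2):
    """Flag a closed eye; consecutive closed frames would count as a blink
    or, past a longer duration, as drowsiness."""
    return eye_aspect_ratio(eye) < threshold
```

Yawns and nods would be judged analogously from mouth landmarks and head-pose angles.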
Generative adversarial networks (GANs) are one of the most popular innovations in machine learning. GANs provide a way to learn deep representations without extensively annotated training data, achieving this by deriving backpropagation signals through a process involving a pair of networks. GANs are generative models in that they can create data instances resembling the training data. The representations learned by GANs can be used in various fields such as image generation, anomaly detection, video repair, infrared-to-RGB translation, and image inpainting. This review paper provides a clear overview of GANs and their application to image inpainting, and it points out both the advantages and disadvantages of GANs in the machine-learning field.
The rapid development of artificial intelligence has given rise to a series of thought-provoking algorithmic problems, among which algorithmic discrimination is especially prominent. Its main manifestations include feature-selective discrimination, associative discrimination, and big-data-enabled price discrimination. Its causes mainly include original bias, learning bias, and external bias. The regulation of algorithmic discrimination should follow the principles of visibility and transparency; at the same time, improvements to algorithmic technology must ensure the accuracy of algorithm operation while attending to the inherent fairness of the algorithm. In all, regulating algorithmic discrimination requires a synergistic arrangement formed by the joint participation of multiple parties.
Interactive landscape design emphasizes the interaction between people and landscape elements; compared with traditional landscape design, it can offer a richer and more engaging landscape experience and has attracted increasing attention. To test and judge the interactive experience of interactive landscape design schemes, the application of virtual-reality technology has become a research focus in the industry, and how to apply it effectively has long concerned designers. Therefore, based on immersive VR technology and its basic application requirements, this paper discusses how to apply VR technology effectively in interactive landscape design from many aspects, so as to give full play to its advantages and provide a reference for related design work in the future.
Interactive software for digital museums has gradually become mainstream in the mobile market. It faces a huge user base, and research on the gamified user experience of mobile museums has been widely recognized and applied. This paper integrates game thinking and game mechanics and analyzes feasible methods for interactive display in mobile digital museums. Gamified interaction design promotes mobile virtual museums along multiple dimensions such as navigation, experience, and technology, and some reference conclusions are obtained through the analysis.
A monocular visual-inertial system with high accuracy and robustness is a popular choice for vehicle localization. The system fuses camera and IMU data to obtain accurate poses through state estimation in the back-end module. However, degenerate vehicle motions often render some states unobservable, which increases state-estimation error. In this paper, we propose a multi-model state-estimation method to avoid this situation. Three modes are identified according to the observability of the system under various motions, and a mode-judgment module selects among them by weighting the errors of scale and gyroscope bias. Additionally, we combine PnP with the integration of IMU data to accurately obtain each new frame's state. To achieve optimal estimation, we refrain from optimizing unobservable states and keep them at their initial values. Our method is verified in real-world experiments and outperforms traditional state-estimation methods.
Musician-influence evaluation, music-similarity measurement, and genre classification are practical research fields in music recommendation and analysis. Based on follower data for 5,854 artists over the past 90 years and 16 musical indicators in three categories (such as danceability and rhythm) for 98,340 songs, a directed network of musicians' influence is constructed to measure each musician's influence.
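A simple influence score on such a directed network might count direct followers at full weight and second-hop followers at a damped weight; the damping factor and scoring rule here are illustrative, not the paper's measure:

```python
def influence_scores(edges, damping=0.5):
    """Score each musician in a directed influence graph, counting direct
    followers at full weight and second-hop followers at a damped weight.
    `edges` is a list of (influencer, follower) pairs."""
    followers = {}
    for src, dst in edges:
        followers.setdefault(src, set()).add(dst)
        followers.setdefault(dst, set())
    scores = {}
    for artist, direct in followers.items():
        second_hop = set()
        for f in direct:
            # followers-of-followers, excluding direct ones and the artist
            second_hop |= followers[f] - direct - {artist}
        scores[artist] = len(direct) + damping * len(second_hop)
    return scores
```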
When fuzzy factors are present in the samples, or noise lies near the classification surface that is decisive for the support vector machine (SVM), the SVM's results are greatly affected. In this case, the noise must be effectively removed to minimize its impact on the samples. Here, a one-class-versus-rest scheme is used to build the fault classifier, and a decision algorithm combining a binary tree with a fuzzy support vector machine is proposed. In this algorithm, noise points near the classification surface are first removed by the SDWFCM (spatial-distance-weighted fuzzy C-means) method. Then, based on a fuzzy feature representation, the fuzzy support vector machine is improved by combining a fuzzy-membership determination method with a threshold determined after fuzzy programming. The resulting support vector machine can effectively process composite samples containing fuzzy samples, so as to accomplish fault diagnosis for a certain type of equipment.
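The fuzzy-membership step can be sketched by shrinking each sample's weight with its distance from the class center; the paper's exact membership function is not given, so this distance-based form is an assumption:

```python
import math

def fuzzy_memberships(samples, delta=1e-6):
    """Assign each sample a membership in (0, 1] that shrinks with its
    distance from the class center, a common fuzzy-SVM weighting scheme.
    Samples far from the center, likely noise, get small memberships and
    thus little influence on the separating surface."""
    dim = len(samples[0])
    center = [sum(s[d] for s in samples) / len(samples) for d in range(dim)]
    dists = [math.dist(s, center) for s in samples]
    r = max(dists)
    # delta keeps the farthest sample's membership strictly positive
    return [1.0 - d / (r + delta) for d in dists]
```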
Aiming at the low recognition rate of a single network caused by insufficient feature extraction in classroom-behavior recognition, a dual-network (VGG16 and ResNet50) model architecture, the Student Recognition Net (SRN), is proposed. First, data are collected and preprocessed to construct a student-behavior dataset. Second, YOLOv3 is used to obtain the students' coordinate information to facilitate subsequent adjustment of the dataset. Finally, the outputs of the two networks are weighted and retrained to increase the extraction of feature information. Experiments show that the model's average behavior-recognition accuracy on the SBR-9 dataset reaches 92.3%, with accuracy of up to 98.2% for some behaviors, outperforming behavior recognition with a single network.
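The weighted fusion of the two networks' outputs can be sketched as a convex combination of their class-probability vectors; the mixing weight alpha is hypothetical and would be tuned on validation data:

```python
def fuse_predictions(probs_a, probs_b, alpha=0.5):
    """Weighted fusion of two networks' class-probability vectors
    (e.g. VGG16 and ResNet50 heads). Returns the fused distribution
    and the predicted class index."""
    fused = [alpha * a + (1 - alpha) * b for a, b in zip(probs_a, probs_b)]
    return fused, max(range(len(fused)), key=fused.__getitem__)
```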
Handwritten character recognition is widely used in mail collection and classification, note sorting and filing, financial-data identification and analysis, and more. Current machine-learning methods for handwritten character recognition mainly include Random Forest (RF) and Support Vector Machine (SVM). However, the performance of these methods degrades considerably because of high noise, nonlinearity, and faint feature values in handwritten-character images. This paper presents a stable and adaptable Convolutional Neural Network (CNN) to recognize handwritten characters. First, the images are preprocessed by noise reduction, one-hot encoding, and contrast enhancement to increase recognizability. Then, the structures of several classic CNN models are combined to construct a CNN with strong adaptability and high stability. This CNN loops through a combination of a convolutional layer, a pooling layer, and a Dropout layer, which realizes the full connection of local features and reduces the number of parameters and the data complexity. The Dense layers map the flattened image data to the output space by extracting the correlations between features. Each layer combines with ReLU activation functions to perform nonlinear mapping, alleviating problems such as vanishing gradients and overfitting. The proposed CNN is compared with RF and SVM on the English Handwritten Characters dataset using accuracy and confusion matrices. The recognition accuracies of RF, SVM, and CNN are 0.90, 0.92, and 0.96, respectively. RF and SVM confuse multiple characters, while the CNN produces only a small amount of confusion between the characters G and S. In tests with actual handwritten characters as samples, this CNN also shows higher recognition accuracy and fewer confused characters than RF and SVM. In addition, this paper compares and evaluates this CNN against other representative CNN models. The experimental results show that the proposed CNN has a clear advantage in balancing fitting speed and recognition accuracy.
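Two of the preprocessing steps mentioned, one-hot encoding of labels and contrast enhancement, are easy to sketch; the class ordering and min-max stretch below are assumptions, since the paper does not specify its exact variants:

```python
def one_hot_encode(labels, classes):
    """One-hot encode character labels as CNN training targets.
    `classes` fixes the column order (e.g. digits, then upper- and
    lower-case letters for a 62-class character set)."""
    index = {c: i for i, c in enumerate(classes)}
    return [[1 if index[label] == j else 0 for j in range(len(classes))]
            for label in labels]

def stretch_contrast(pixels):
    """Min-max contrast stretch of grayscale pixel values to [0, 1]."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [0.0] * len(pixels)
    return [(p - lo) / (hi - lo) for p in pixels]
```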
Static gesture recognition is an interactive technology that enables people, especially the hearing-impaired, to communicate directly with other people and machines without third-party auxiliary equipment. However, it is challenging because of limited gesture datasets and the accuracy of pre-processing. This paper proposes a Convolutional Neural Network (CNN) with hand-landmark detection to recognize static gestures and significantly improve pre-processing accuracy. First, a dataset is built by taking 5,800 photos of 4 people, covering 13 one-handed gestures and 3 two-handed ones. Then, the dataset is pre-processed in two ways, skin-color detection and hand-landmark detection, to select the higher-accuracy method. Finally, a CNN is trained on both pre-processed datasets to classify the samples into 16 classes. To verify the method's effectiveness, this paper also compares a baseline model comprising a Support Vector Machine with Principal Component Analysis. The experimental results show that the CNN eclipses the other model, reaching 99% accuracy for static gesture recognition.
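Before classification, detected hand landmarks are typically normalized for position and scale; the scheme below is one common choice, not necessarily the paper's exact preprocessing:

```python
import math

def normalize_landmarks(points):
    """Normalize 2-D hand landmarks for gesture classification: translate
    so the wrist (first point) is the origin, then scale so the farthest
    landmark is at distance 1. Makes the features invariant to hand
    position and size in the frame."""
    ox, oy = points[0]
    shifted = [(x - ox, y - oy) for x, y in points]
    scale = max(math.hypot(x, y) for x, y in shifted) or 1.0
    return [(x / scale, y / scale) for x, y in shifted]
```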
Time-Sensitive Networking (TSN) is a new generation of deterministic transmission technology that adds determinism and reliability to traditional Ethernet. To study and realize centralized scheduling management for time-sensitive network switches, this paper proposes a centralized scheduling-management system for time-sensitive networks. The system includes three functions, topology discovery, TSN YANG-model configuration, and traffic scheduling, and achieves the automatic configuration management that time-sensitive networking requires. Finally, an experimental verification platform is built with a TSN switch developed in our laboratory, verifying the feasibility of the proposed scheme.
With the development of multimedia technology, fine-grained visual classification (FGVC) has gradually become one of the hot tasks in the computer-vision community; its goal is to distinguish images of subordinate categories within the same basic category, such as different species. Though FGVC accuracy has improved greatly, performance is still limited by the difficulty of locating objects' discriminative regions, as common state-of-the-art convolutional neural networks that perform excellently on generic image-classification benchmarks such as ImageNet-1k cannot be directly applied to FGVC tasks. We therefore provide a comprehensive and systematic survey of recent advances in the FGVC field and divide the existing methods into: creative applications of attentive structures, aids from diverse pretraining methods, various designs of loss functions, and other innovative methods. We further analyze the performance of representative methods on common datasets, and finally summarize the open problems in FGVC research and anticipate future solutions.
Current analytical methods based on finite-volume simulations are inefficient because convection coefficients are often unknown. In this work, a loss function is defined based on the mean error in satisfying the PDE, the initial conditions, and the boundary conditions (BCs). Furthermore, heat-transfer theory is used for feature engineering. Predictions for the one- and two-dimensional cases are verified by comparison with finite-element results. The results show that, with engineered features, heat transfer outside the training region can be precisely predicted. Trained models with fixed initial conditions can quickly evaluate a range of BCs and support feedback loops that implement the Industry 4.0 concept of active manufacturing control based on sensor data.
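The composite loss, mean error in satisfying the PDE plus initial- and boundary-condition terms, can be sketched for the 1-D heat equation; the domain, collocation points, and initial profile below are assumptions, since the abstract does not specify them:

```python
import math

def heat_loss(u, alpha=1.0, h=1e-3, xs=None, ts=None):
    """Physics-informed composite loss for the 1-D heat equation
    u_t = alpha * u_xx on x in [0, 1]. `u` is a candidate solution
    u(x, t); derivatives are estimated with central differences."""
    xs = xs or [0.1 * i for i in range(1, 10)]
    ts = ts or [0.1 * i for i in range(1, 10)]
    # PDE residual at interior collocation points
    pde = []
    for x in xs:
        for t in ts:
            u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
            u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
            pde.append((u_t - alpha * u_xx) ** 2)
    # Initial condition u(x, 0) = sin(pi x); boundaries held at zero
    ic = [(u(x, 0) - math.sin(math.pi * x)) ** 2 for x in xs]
    bc = [u(0.0, t) ** 2 + u(1.0, t) ** 2 for t in ts]
    return sum(pde) / len(pde) + sum(ic) / len(ic) + sum(bc) / len(bc)
```

A candidate solution with small loss both satisfies the PDE at the collocation points and matches the prescribed initial and boundary conditions.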
International multimodal freight transport has many characteristics, such as high cost, limitations of time and space, and the difficulty of repeating operations, which make it hard for students to carry out field practice on the logistics process of the CHINA RAILWAY Express, which spans many countries. Through virtual simulation, the whole logistics process of the "Belt and Road" CHINA RAILWAY Express can be presented. From the perspective of a multimodal transport operator (or multimodal carrier), students can participate in the experiment and experience the whole process of international multimodal transport first-hand. This overcomes some problems of the traditional classroom; specifically, students often lack interest in learning, and teaching methods lack innovation.
Alpine meadow in the Sanjiangyuan area plays an important role in China's ecology. However, in recent years, the grassland vegetation there has been seriously degraded by climate change and human activity. Reasonable prediction of grassland degradation is a prerequisite for protecting the area's ecological environment, but the traditional manual measurement method is time-consuming and inefficient. To address the shortage of data, we first spent 2-3 months on field measurements, obtaining more than 150,000 records for supervised learning, which we divided into training and test sets. In the training phase, a DNN model and the K-means algorithm preprocess the data in parallel, and the trained model is then used for prediction, greatly shortening the test time. Finally, several experiments on the test set verify the validity of the model: the average prediction accuracy exceeds 99%, and the AUC exceeds 95%. This indirectly proves that the model can be applied to predicting grassland-vegetation degradation in the Sanjiangyuan area.
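The K-means preprocessing step can be sketched with Lloyd's algorithm on a scalar feature; the feature, cluster count, and initialization here are illustrative, not the paper's configuration:

```python
def kmeans_1d(values, k=2, iters=20):
    """Lloyd's algorithm on scalar features (e.g. a vegetation index).
    Returns the cluster centers and each value's cluster label."""
    # Spread the initial centers across the sorted values
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            j = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[j].append(v)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    labels = [min(range(k), key=lambda j: abs(v - centers[j])) for v in values]
    return centers, labels
```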
With the advent of the big-data era, building a model that automatically mines effective features from data and trains a click-through-rate (CTR) prediction model takes a great deal of time and manpower, because a single-model classifier must have effective features extracted and fed in through feature processing and data mining. In view of the shortcomings of website CTR-prediction technology, this paper proposes a model based on a squeeze-and-excitation network to extract the influence of individual factors within the overall feature set. The paper also applies a neuron mechanism to extract effective features from the original features, fuses the features produced by the two models into new features, and adds them to a deep neural network for training. Experiments are designed to demonstrate the soundness of the method. The results show that, compared with currently common models, it improves the AUC while keeping efficiency within a reasonable range. The research results are significant not only for the development of science and technology but also for daily life and economic activity.
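The squeeze-and-excitation mechanism, weighting each feature field by a learned gate, can be sketched as follows; the tiny weight matrices are hypothetical stand-ins for parameters that would normally be learned:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def squeeze_excite(features, w1, w2):
    """Squeeze-and-excitation reweighting of a list of per-field feature
    vectors. Squeeze: mean-pool each field to a scalar; excite: two tiny
    linear layers (ReLU then sigmoid) produce a gate per field; scale:
    multiply each field by its gate."""
    squeezed = [sum(f) / len(f) for f in features]                  # squeeze
    hidden = [max(0.0, sum(wi * s for wi, s in zip(row, squeezed)))
              for row in w1]                                        # excite
    gates = [sigmoid(sum(wi * h for wi, h in zip(row, hidden)))
             for row in w2]                                         # gate
    return [[g * x for x in f] for g, f in zip(gates, features)]    # scale
```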
Software-defined networking and network function virtualization together enable network orchestration, which allows differentiated services to be deployed faster and more efficiently. With the rapid development of artificial intelligence, combining network orchestration systems with AI is an important means of advancing virtual network functions. This research first analyzes network function virtualization, covering its concepts, architecture, and challenges. It then examines the architecture of an AI-based virtual network orchestration system in order to explore dynamic orchestration algorithms for AI-driven virtualized network functions and realize their automated, intelligent transformation. In practical applications, the constraints on the dynamic orchestration of virtualized network functions must be understood, and the design of AI-based dynamic orchestration must be completed accordingly.
Virtual restoration of ancient buildings has received growing attention in the protection and study of ancient architectural heritage, and building realistic virtual environments with restoration-related technologies is a current research hotspot. By stitching and fusing photographs and applying 3D reconstruction, this approach complements the modeling and rendering of 3D geometric models built from real-scene measurement data and partially overcomes the difficulty of measuring damaged or no-longer-extant buildings, making it an efficient route to virtual restoration. Image restoration is an important component of this process and of great practical significance. Exploiting the strengths of convolutional neural network models, this work studies two active problems in image restoration: image deblurring and image super-resolution reconstruction. Virtual restoration technology is well developed in science and technology, medicine, the military, and other fields, and it has also advanced the construction of virtual ancient-architecture systems. The virtual restoration of ancient architectural heritage combines science and art: it digitizes the heritage, breaks the restrictions of time and space, improves the effectiveness and efficiency of exhibitions, and supports the protection of cultural heritage.
Electrocardiography (ECG) is the most commonly used diagnostic method for heart disease. However, ECG signal acquisition is often subject to noise interference, which greatly affects the accuracy of classification and diagnosis. To improve this accuracy, we propose a method combining the continuous wavelet transform (CWT) with the deep convolutional network GoogLeNet. We use three sets of ECG data from the PhysioNet database: cardiac arrhythmia (ARR), congestive heart failure (CHF), and normal sinus rhythm (NSR). First, we construct time-frequency representations of the ECG signals using the CWT, known as scalograms. Second, we adapt GoogLeNet to better recognize these ECG images, achieving an accuracy of 96.87%. Finally, we visualize the filter weights of the first convolutional layer, identify the channel most relevant to the original input, and compare it with the input to analyze the interpretability of deep neural networks.
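As a rough illustration of the scalogram construction, the sketch below computes a Morlet-based CWT in plain NumPy; the function name, the scales, and the wavelet parameter `w0` are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def morlet_scalogram(x, scales, w0=6.0):
    """Time-frequency scalogram of a 1-D signal via convolution with
    scaled complex Morlet wavelets (normalization constants omitted)."""
    n = len(x)
    out = np.empty((len(scales), n))
    t = np.arange(-n // 2, n // 2)
    for i, s in enumerate(scales):
        u = t / s
        wavelet = np.exp(1j * w0 * u) * np.exp(-u**2 / 2) / np.sqrt(s)
        # magnitude of the wavelet response at this scale, one row per scale
        out[i] = np.abs(np.convolve(x, wavelet, mode="same"))
    return out
```

Each row of the result corresponds to one scale, so the matrix can be rendered directly as the image fed to the CNN.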
Automated fault identification for transmission lines has long been a hot research issue in the field of power applications. With the rapid development of automated data-acquisition equipment such as UAVs and of artificial neural network technology, more and more researchers are turning to transmission-line fault recognition based on line-patrol UAVs. In this paper, based on an in-depth literature survey, we provide a representative description and analysis of UAV power-line detection from two aspects. According to the data-acquisition strategy (radar, infrared, ultraviolet, or visible-light images), we first discuss the advantages and disadvantages of each data source while introducing the representative methods designed for it. We also introduce efficient information-processing algorithms for UAV power-line fault detection. Finally, we summarize the main challenges in UAV-based transmission-line fault recognition and discuss possible future directions.
Through an in-depth study of arrival-time prediction models and algorithms for highway and non-highway traffic, this paper finds that the DA-RNN (dual-stage attention-based recurrent neural network) outperforms other classical prediction models. We therefore introduce an attention mechanism to adaptively select the most relevant factors from heterogeneous information and build a DA-RNN-based prediction network. Experiments are carried out on real bus-operation data from Jinan to predict bus travel times, and several methods are tested under the same data conditions. Comparative analysis of the results shows that the proposed method performs best on the dataset provided by the Jinan bus company.
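The core of the input-attention idea can be sketched as a softmax re-weighting of the driving series; the scoring vector `w` below stands in for the learned attention network, so this is a minimal illustration rather than the full DA-RNN.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def input_attention(x, w):
    """Re-weight heterogeneous input factors by learned relevance.

    x: (n_series, T) driving series (e.g. traffic-related factors)
    w: (n_series,) raw attention scores, here assumed given
    """
    alpha = softmax(w)                 # attention weights, sum to 1
    return alpha[:, None] * x, alpha   # re-weighted inputs and weights
```

In the actual DA-RNN the scores are produced by a small network conditioned on the encoder's hidden state and recomputed at every time step.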
In recent years, artificial intelligence technology has been applied in many fields, and China is accelerating the pace of AI development in the sports industry. Applications of AI in sport are many: monitoring the physical condition of athletes, analyzing sports data, analyzing event performance in real time, and deploying intelligent equipment in sports venues. These changes have enhanced the precision of sports, maximized the physical capability of athletes, and improved the spectacle of sporting events. This paper analyzes in detail the intelligence of sporting goods, the digitization of sports events, and the intelligence of stadiums in China's sports industry, and uses a factor-analysis model to rank the competitive-sports capabilities of several Chinese provinces. Further analysis identifies the factors that influence urban competitive-sports capability.
Handwritten character recognition can be applied in education, finance, social good, and many other fields, and a variety of kernelized methods have been applied to it. However, the performance of many kernelized algorithms depends greatly on the choice of kernel, and the number of standard choices is quite limited. This paper uses a Random Forest to generate a kernel and constructs a multiclass SVM classifier on that kernel for handwritten character recognition. The proposed method generates a targeted kernel by analyzing the dataset, which improves the prediction accuracy of the final model. First, all images are compressed by grayscale transformation, binary conversion, and redundancy elimination, and their sizes are normalized to 32 × 32. Then, the data selected for training and validation is used to build a Random Forest model responsible for generating the Random Forest kernel. Finally, based on this kernel, a multiclass SVM classifies the dataset into 36 kinds of handwritten characters. The proposed method is compared with a linear-kernel SVM, a polynomial-kernel SVM, and a Random Forest to evaluate its efficiency. The experimental results demonstrate that the SVM with a Random Forest kernel achieves relatively high accuracy in recognizing handwritten characters.
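A common way to turn a trained forest into a kernel is to count, for each pair of samples, the fraction of trees in which they land in the same leaf. The sketch below assumes leaf-index matrices of the kind returned by scikit-learn's `RandomForestClassifier.apply`; the paper's exact kernel construction may differ.

```python
import numpy as np

def forest_kernel(leaves_a, leaves_b):
    """Random-forest kernel: K[i, j] = fraction of trees in which sample i
    (a row of leaves_a) and sample j (a row of leaves_b) share a leaf.

    leaves_*: (n_samples, n_trees) integer leaf indices per tree.
    """
    # broadcast-compare leaf ids tree by tree and average the matches
    return (leaves_a[:, None, :] == leaves_b[None, :, :]).mean(axis=2)
```

The resulting Gram matrix is symmetric and positive semidefinite, so it can be passed to an SVM with a precomputed kernel.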
Data-to-text generation has always been a research hotspot in multimodal machine learning. Given structured data, such as a list of a person's characteristics or statistics from an exam, data-to-text generation aims to automatically produce smooth, truthful, and accurate text describing the input. Thanks to the rapid development of neural networks, deep-learning-based text generation models have made great breakthroughs in performance, but they still face many challenges, chiefly confusion over where different pieces of information should appear, generation of non-existent content, duplication of information, and data sparseness. In this paper, based on an in-depth literature survey, we introduce representative algorithms from the perspectives of improving text-evaluation quality, combining deep learning with knowledge, adapting to varied expressions, and aligning text with structured data. We also summarize the existing problems and discuss possible future directions in data-to-text generation research.
Neural networks have achieved great success in many areas in recent years, but they are vulnerable to adversarial examples: a small, carefully crafted perturbation of an input x can change the predicted label of x. Training on adversarial examples, known as adversarial training, can improve the robustness of neural networks. In fact, much of the information in the data is not needed for prediction: humans can classify an image correctly from only a small part of it, and the background contributes little. In this paper, we propose a novel method for producing adversarial examples: we use contrastive learning to extract the foreground of an image and then perturb only that foreground. Detailed experiments on two datasets (MNIST and CIFAR-10) show the effectiveness of our method.
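Restricting a perturbation to the foreground can be sketched as an FGSM-style update masked by the extracted foreground; the gradient here is assumed to be given, and the contrastive-learning segmentation step is omitted, so this is a minimal illustration rather than the paper's method.

```python
import numpy as np

def masked_fgsm(x, grad, mask, eps=0.1):
    """FGSM-style perturbation applied only to foreground pixels.

    x:    input image with values in [0, 1]
    grad: gradient of the loss w.r.t. x (assumed precomputed)
    mask: binary foreground mask of the same shape (1 = foreground)
    """
    x_adv = x + eps * np.sign(grad) * mask   # background left untouched
    return np.clip(x_adv, 0.0, 1.0)          # keep valid pixel range
```

Because the mask zeroes the update outside the foreground, the background of the adversarial image is bit-identical to the original.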
When customers purchase luxury goods, they need to compare the prices of different items, and because of the particular characteristics of luxury markets, price premiums are common. In this paper, we forecast the prices of luxury bags from important features such as brand, leather type, number of components, volume, and production. The prediction problem is cast as regression. The dataset was collected from the internet by web crawlers and contains 27 raw features, such as brand and bag name. After preprocessing, these features are divided into three groups: continuous features, ordered categorical features, and unordered categorical features. We build a hybrid of MLP and AdaBoost models and compare it with other models using RMSE as the evaluation metric. Our model achieves an RMSE of 1627.94, which is 1263.55, 1369.62, 1199.18, and 520.18 lower than Linear Regression, SVR, MLP, and AdaBoost, respectively.
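For reference, RMSE and one simple way of hybridizing two regressors (a weighted average of their predictions) can be sketched as follows; the paper does not specify its MLP-AdaBoost combination scheme, so the `hybrid_predict` weighting is an assumption.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between true and predicted prices."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def hybrid_predict(pred_mlp, pred_ada, w=0.5):
    """Weighted average of two base regressors' predictions."""
    return w * np.asarray(pred_mlp) + (1 - w) * np.asarray(pred_ada)
```

In practice the weight `w` would itself be tuned on a validation split.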
A video can be regarded as a sequence of related images, so video prediction must capture not only the characteristics of individual frames but also the temporal logic between them. The Predictive Recurrent Neural Network (PredRNN) is a video-frame prediction network that uses a spatiotemporal memory flow within a spatiotemporal LSTM. This paper introduces an improvement to PredRNN based on feature fusion: because PredRNN's spatiotemporal memory flow suffers from vanishing gradients as depth increases, we fuse the spatiotemporal memory information and strengthen the gradients of the deep network to improve long-term video prediction. Finally, the Moving MNIST and KTH datasets are used to evaluate our network, and the experimental results show a measurable improvement over PredRNN.
Many people are accustomed to searching and managing information on electronic devices, yet in some cases only handwritten documents exist, so handwritten character recognition, which converts handwritten documents into machine-encoded text, is an increasingly vital tool. There are many ways to implement it, including line and word segmentation, semi-incremental, and incremental approaches; however, recognition accuracy and training time are difficult to balance. This paper uses a convolutional neural network (CNN) to train on and fit the data. First, the training data are preprocessed by resizing the images and separating the letters from their backgrounds. The original dataset is then split into a training set and a validation set to test the model's effectiveness. Next, the CNN architecture is tuned, and a LeNet-style model is adopted: an 8-layer deep convolutional neural network including the input layer. To test the model's practicability in real life, a set of handwritten words was collected as a new test set, and accuracy remains high even on these self-written words. To verify the effectiveness of the proposed method, we compare it with state-of-the-art baselines including random forests and support vector machines. The results show that the LeNet-style model outperforms these benchmarks in handwritten character recognition.
Conditional Generative Adversarial Networks (GANs) play an important role in computer vision. Their strength lies not only in feature-extraction ability beyond that of traditional algorithms but also in the unprecedented success of the adversarial game between two neural networks: the generator and the discriminator. The generator produces an image close to the expected output from its input, while the discriminator judges whether the generated image is real; training the two models against each other thus aims to generate high-quality images that approach the specified attributes. This paper analyzes the basic network structure of the Conditional GAN and its applications in image generation, image style transfer, font style transfer, and natural language processing. Variants and optimizations of the Conditional GAN, such as InfoGAN, CycleGAN, StackGAN, McGAN, and Conditional SeqGAN, are also described. Overall, GANs have had an unprecedented impact on computer vision, image processing, and NLP, and they retain great potential for future development.
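For reference, the adversarial game described above, conditioned on side information $y$ (class label, text, etc.), is the standard Conditional GAN objective:

```latex
\min_G \max_D \; V(D, G) =
\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x \mid y)\right] +
\mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z \mid y) \mid y)\bigr)\right]
```

The only change from the unconditional GAN objective is that both the generator and the discriminator receive $y$ as an additional input.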
Reinforcement learning (RL) is a widely used learning paradigm today. As a common RL method, policy optimization typically updates parameters by maximizing the expected cumulative reward, along with other information obtained through interaction with the environment. To better understand RL and its learning theory, it has also been proposed that RL can be regarded as an optimal transport problem in a space of probability measures. On this basis, we obtain a large-scale Wasserstein gradient flow RL method by introducing input convex neural networks (ICNNs) to improve the Jordan-Kinderlehrer-Otto (JKO) scheme.
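For reference, the JKO scheme referred to above discretizes a Wasserstein gradient flow of an energy $F$ by the proximal step

```latex
\rho_{k+1} \;=\; \operatorname*{arg\,min}_{\rho \,\in\, \mathcal{P}_2(\mathbb{R}^d)}
\; \frac{1}{2\tau}\, W_2^2(\rho, \rho_k) \;+\; F(\rho)
```

where $W_2$ is the 2-Wasserstein distance and $\tau > 0$ the step size; parameterizing the transport maps with ICNNs is what makes this step tractable at scale.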
The global economy has been severely damaged by the COVID-19 pandemic, which has impoverished much of the world's population and increased uncertainty about social policy. Meanwhile, many researchers have devoted themselves to analyzing economics with machine learning. Starting from the decline in population, the health crisis has translated into an economic crisis: the spread of the virus encouraged social distancing, which led to the shutdown of financial markets, corporate offices, businesses, and events. In this paper, we use a dataset from the Kaggle platform to analyze the economic effects of COVID-19, choosing metrics such as the Human Development Index and the total deaths caused by the virus. Our model is a hybrid combining AdaBoost, a boosting model with strong performance, and Linear Regression. In comparative experiments using MSE, our model performs best with the lowest MSE of 7.23; the scores of KNN and Random Forest are 2.58 and 2.55 higher, respectively, than that of our hybrid model.
Named entity recognition in medicine is an important and popular task that underpins artificial intelligence for internet-based medicine and the structuring of natural-language text in the medical field. Medical named entity recognition is mainly a sequence-labeling task, and traditional methods based on rules and dictionaries have high construction cost and poor generalization. In this paper, a text-similarity task over online medical-consultation sentences is used to fine-tune a pre-trained model and inject medical domain knowledge into it. The fine-tuned RoBERTa-wwm model is then connected to a bidirectional LSTM and a CRF layer, and its outputs are fused by voting with a BERT-MRC model. Under strict evaluation, the result is 92.02%, better than popular baselines, with good generalization ability.
To address the hornet problem that emerged in Washington State, we propose a Vespa mandarinia spread model to analyze and predict the dispersal of the hornets. We fit the processed data with a SARIMA model to predict the spread of hornets in Washington State over the next six months; the predictions fit the actual data well. To assess the correctness of sighting reports, we propose a misclassification-likelihood prediction model that relates the content of a sighting report to its probability of being misclassified. We vectorize the text with the TF-IDF algorithm and then train a linear Support Vector Machine (SVM) to predict the probability of misclassification. The model's predictions show that 98% of the sightings were misclassified and only 2% were accurate.
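A minimal TF-IDF weighting, of the kind used to vectorize the report text, can be sketched in plain Python; the tokenization into token lists and the smoothing convention here are illustrative assumptions.

```python
import math
from collections import Counter

def tfidf(docs):
    """Minimal TF-IDF: docs is a list of token lists; returns one
    {term: weight} dict per document (term frequency times smoothed idf)."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))      # document frequency
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}   # smoothed idf
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (c / len(d)) * idf[t] for t, c in tf.items()})
    return out
```

Terms that appear in fewer reports receive higher weights, which is what lets a linear SVM separate typical misidentified-sighting vocabulary from genuine ones.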
Big Data Algorithms and Deep Learning Object Detection
In production work and workplaces, many safety accidents occur because workers do not wear safety helmets. To reduce such accidents, this paper proposes a helmet-wearing detection method based on the EfficientDet algorithm. First, the K-Means++ clustering algorithm is used to optimize the initial cluster centers; then a SeparableConv2D network is introduced and combined with the simple, efficient Bi-directional Feature Pyramid Network (BiFPN) from EfficientDet to extract image feature maps; finally, the CC-Loss function is used as the classification loss to constrain the relationship between classes and channels and to maintain separability within and between classes, improving detection accuracy. Experimental results on an open-source helmet dataset show that, under different experimental conditions, the improved EfficientDet model achieves better detection accuracy and speed and better meets the requirements of safety-helmet wearing detection.
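The K-Means++ seeding step mentioned above can be sketched as follows: each new center is drawn with probability proportional to its squared distance from the nearest center already chosen. This is a generic NumPy sketch, not the paper's anchor-clustering code.

```python
import numpy as np

def kmeanspp_init(points, k, rng=None):
    """K-Means++ seeding over an (n, d) array of points."""
    rng = np.random.default_rng(rng)
    centers = [points[rng.integers(len(points))]]   # first center uniform
    for _ in range(k - 1):
        # squared distance of every point to its nearest chosen center
        d2 = np.min(
            ((points[:, None, :] - np.array(centers)[None, :, :]) ** 2).sum(-1),
            axis=1,
        )
        probs = d2 / d2.sum()                       # D^2 sampling
        centers.append(points[rng.choice(len(points), p=probs)])
    return np.array(centers)
```

Spreading the seeds this way is what yields anchor-box priors that match the spread of helmet sizes better than uniformly random initialization.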
Object detection has always been one of the hot tasks in the computer vision community; its goal is to identify and locate instances of a large number of predefined object categories in an image, accurately and efficiently. With the widespread application of deep learning, the accuracy and efficiency of object detection have improved greatly, and thousands of methods have been proposed to push detection accuracy and speed further. In this paper, we first introduce and analyze representative methods from 2D to 3D object detection, then give an exhaustive introduction to common datasets and a comparative performance analysis of the core ideas of different algorithms. We also summarize the open problems in the field and anticipate how they may be addressed in the future.
With the development of through-silicon-via (TSV) technology, the upper and lower layers of 3D NoCs are now widely connected vertically by TSVs. A drawback is that deterministic routing cannot balance load under non-uniform traffic. To solve this problem, an adaptive elevator-selection routing algorithm based on load balancing (ASELB) is proposed, designed with low-latency transmission fully in mind. According to the traffic load in the 3D NoC, a packet can flexibly choose any valid elevator in the destination layer, the source layer, or an intermediate layer, which balances the load in every layer. Simulation results show that the load in each layer (especially around routers with TSVs) remains relatively balanced and congestion is avoided in the evaluated network, while the network latency of the proposed routing algorithm is slightly better than or equal to that of comparable algorithms.
Over the past few years, reinforcement learning (RL) has become one of the most popular topics in machine learning. Its ability to learn from interaction rather than from labeled data makes it powerful and convenient for specific tasks. Unlike decision trees, RL models do not need a pre-configured policy tree to act, which makes it possible for machines to solve extremely complex tasks such as playing Go and other games. One of the most powerful approaches is the Deep Q-Network (DQN), which picks the optimal action in the current state to maximize the expected reward. However, although DQN has been shown to work well in single-agent environments, experiments on its performance in massively multi-agent training are lacking. This paper conducts several experiments in PettingZoo to test the performance of DQN in a massively multi-agent system and the factors that affect it. The results demonstrate that a larger number of agents and a larger experience-replay size improve the performance of DQN in a "Battle" environment.
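The experience replay mentioned above is, at its core, a fixed-size buffer sampled uniformly to de-correlate training batches; a minimal sketch follows (class and method names are illustrative, not from the paper's code).

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size experience replay for DQN: store (s, a, r, s', done)
    transitions and sample uncorrelated mini-batches for training."""

    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)   # oldest transitions drop out

    def push(self, transition):
        self.buf.append(transition)

    def sample(self, batch_size):
        return random.sample(self.buf, batch_size)

    def __len__(self):
        return len(self.buf)
```

The buffer's capacity is exactly the "experience replay size" whose effect on multi-agent performance the paper measures.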
To address the large variation in target size, blurred imaging, and severe occlusion in sonar images, a corner-based target detection method is proposed. Building on CornerNet, an adaptive corner-feature matching module is added that automatically selects the features suited to the target size and assigns them high weight, realizing dynamic hierarchical prediction over multiple feature maps and improving the quality of the predicted corners. The embedding-vector prediction is removed and replaced with a new layer structure that groups corners, reducing the model's false-detection rate. The experimental results show that the improved CornerNet achieves higher recall and better detection accuracy than other sonar-image target detection algorithms.
In the information age, the presence of toxic content has become a major problem for many online communities, and existing methods are not robust enough to detect it. Therefore, the demand for a more accurate and efficient system for toxic message detection has reached its peak. In this paper, we apply machine learning and deep learning models to this task. Following the intuition of acquiring knowledge of both the word itself and its relationship with other words, a stacking model is constructed as the optimal strategy, combining the term frequency-inverse document frequency (TF-IDF) method and a robustly optimized Bidirectional Encoder Representations from Transformers pretraining approach (RoBERTa) as base models, with a neural network as the meta-model. The experiments show that the stacking method and K-fold cross-validation are advantageous, and our model achieves a detection accuracy of 0.9023.
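The TF-IDF base feature named in the abstract weighs a term by its in-document frequency discounted by how many documents contain it. A minimal sketch from first principles (the toy comments are hypothetical, not from the paper's corpus):

```python
import math
from collections import Counter

def tfidf(docs):
    """TF-IDF weights for a list of tokenized documents:
    tf(t, d) * log(N / df(t)), where df counts documents containing t."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    out = []
    for doc in docs:
        tf = Counter(doc)
        out.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return out

docs = [["you", "are", "toxic"],
        ["have", "a", "nice", "day"],
        ["toxic", "comment"]]
weights = tfidf(docs)
# "toxic" appears in 2 of 3 documents, so its idf is log(3/2) > 0
print(round(weights[0]["toxic"], 3))
```

In the paper's stacking setup these sparse weights, together with RoBERTa's contextual representation, feed a neural network meta-model that makes the final toxicity decision.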
News categorization, a text classification task, is now commonly used on many news websites. However, many news classifiers require the full content of the news, which costs a great amount of computation time. In this paper, we focus on the possibility of categorizing news by its title with Support Vector Machines, Random Forest classifiers, Naive Bayes, and Recurrent Neural Networks. First, we explore some widely used pre-processing methods, including Bag of Words and Word2Vec. Then we combine these pre-processing methods with the machine learning algorithms mentioned above to create different models. We measure their performance on the News Aggregator Data Set from the UCI Machine Learning Repository, which contains over 400,000 pieces of news across 4 main categories. To evaluate performance, we use 85% of the data as a training set, 5% as a validation set, and finally 10% as a testing set. Comprehensive experimental results demonstrate that even with only the news titles, some models still perform well on this challenging task. Therefore, it is possible to categorize news by its title with high accuracy at a much lower computing cost than full-text classification.
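One of the compared pipelines, Bag of Words plus Naive Bayes on titles only, can be sketched end to end in pure Python. The four titles and two categories below are hypothetical stand-ins for the News Aggregator data, and the smoothing follows the standard multinomial model rather than any detail from the paper:

```python
import math
from collections import Counter, defaultdict

def train_nb(titles, labels):
    """Multinomial Naive Bayes over title tokens (bag-of-words counts)."""
    word_counts = defaultdict(Counter)
    class_counts = Counter(labels)
    for title, y in zip(titles, labels):
        word_counts[y].update(title.lower().split())
    vocab = {w for c in word_counts.values() for w in c}
    return word_counts, class_counts, vocab

def predict_nb(model, title):
    """Pick the class maximizing log P(class) + sum log P(word | class),
    with Laplace smoothing so unseen words do not zero out a class."""
    word_counts, class_counts, vocab = model
    n = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for y in class_counts:
        total = sum(word_counts[y].values())
        lp = math.log(class_counts[y] / n)
        for w in title.lower().split():
            lp += math.log((word_counts[y][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = y, lp
    return best

titles = ["fed raises interest rates", "new cancer drug approved",
          "stocks rally on earnings", "hospital trials gene therapy"]
labels = ["business", "health", "business", "health"]
model = train_nb(titles, labels)
print(predict_nb(model, "markets rally as rates fall"))
```

Even with a handful of tokens per title, the class-conditional word statistics are often discriminative enough, which is the paper's central observation.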
Conflagrations occur more and more frequently nowadays, so it is important to detect fire or smoke in real time in order to warn people earlier and prevent disaster. This paper addresses the problem of precise and economical detection of fires of all scales, from forest fires to cigarettes. To maximize the performance of YOLOv5, its backbone network is replaced with MobileNetV2. Compared with the original YOLOv5, the new network performs better in some respects on both mobile devices and PC. On PC, the accuracy of fire detection using YOLOv5-MOB increases by 6.7%, the accuracy of smoke detection decreases by 9.1%, the speed slows down by 4.4%, and there are few overlapping boxes in the detection results. On mobile devices, the accuracy of fire detection using YOLOv5-MOB increases by 0.2%, the accuracy of smoke detection decreases by 0.8%, and detection speeds up by 5.2% on CPU and 5.4% on GPU compared with YOLOv5. This makes the detection of conflagration easier and more economical on mobile devices.
Breast cancer has become the fastest-growing cancer, whose early diagnosis and prediction require precise medical tools. However, the accuracy of conventional machine learning classification prediction needs to be improved. Accordingly, ensemble learning, a novel idea in machine learning, has been proposed, which is capable of significantly improving prediction accuracy and presenting novel insights into breast cancer classification prediction. In this paper, six of the latest ensemble learning classification algorithms (i.e., XGBoost, CatBoost, GBDT, LightGBM, Random Forest, and Extra Trees) are compared with one conventional machine learning algorithm (i.e., K-Nearest Neighbor (KNN)). The original Wisconsin breast cancer dataset is adopted to train the models, and the model effect is assessed using evaluation indicators (e.g., precision, recall, and accuracy) after training. The XGBoost algorithm yields the highest prediction accuracy for breast cancer cells. Furthermore, it was revealed that ensemble learning algorithms generally achieve higher accuracy than conventional machine learning algorithms.
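The core idea behind all six ensemble methods is combining several weak learners so their errors cancel. A schematic hard-voting illustration (the three threshold rules and the (radius, texture) feature pair are hypothetical, far simpler than boosted trees, but show the mechanism):

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Hard-voting ensemble: each base model predicts, the majority label wins."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Three hypothetical weak rules over a (radius, texture) tumour feature pair
clf_a = lambda x: "malignant" if x[0] > 14 else "benign"
clf_b = lambda x: "malignant" if x[1] > 20 else "benign"
clf_c = lambda x: "malignant" if x[0] + x[1] > 33 else "benign"

sample = (15.0, 19.0)   # clf_a: malignant, clf_b: benign, clf_c: malignant
print(majority_vote([clf_a, clf_b, clf_c], sample))
```

Random Forest and Extra Trees aggregate exactly this way over randomized trees, while the gradient boosting family (XGBoost, CatBoost, GBDT, LightGBM) instead fits each new learner to the residuals of the previous ones.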
With the comprehensive development and application of machine learning, supply forecasting by machine learning has achieved good results. However, as supply chain fraud is more complicated, a single simple machine learning algorithm can no longer achieve good performance for supply fraud forecasting. In this paper, we propose a combined XGBoost and random forest algorithm to predict supply fraud. The algorithm first uses the random forest to filter out the unimportant variables and obtain the essential variables, and then builds an XGBoost model to predict supply chain fraud. Experimental results demonstrate that our proposed combined algorithm achieves better performance for supply fraud prediction than logistic regression, random forest, and XGBoost alone.
Reinforcement learning has always been a research hotspot in the machine learning community; it models the process of an agent interacting with the environment, making sequential decisions, optimizing its strategy, and maximizing cumulative return. With the rapid development of artificial intelligence technology, the huge research value and application potential of reinforcement learning have gradually become prominent. In this paper, we first introduce the development background of reinforcement learning, and then introduce its applications from the three aspects of games, finance, and autonomous driving. Regarding games, we introduce the methods and results of reinforcement learning in Atari 2600 games and strategy games. For finance, we show the application of reinforcement learning from the perspectives of stock value prediction and deep hedging of derivatives. For autonomous driving, we briefly describe the driving pipeline and show how reinforcement learning constructs the policy function, from the perspectives of automatic parking and road form. Finally, we look forward to the algorithms and applications of reinforcement learning, and give suggestions for future research directions.
Credit assignment has always been one of the hot tasks in the reinforcement learning community; it aims to estimate and deduce the reward for each agent during the decision-making process. In this paper, based on extensive literature research, we first review the existing solutions to the credit assignment issue and compare the impact of different credit assignment strategies on algorithms' final performance. In addition, we summarize the main challenges of credit assignment and give some suggestions for future research directions.
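One classic strategy such surveys cover is the difference reward, D_i = G(z) - G(z_{-i}): an agent's credit is the global reward minus the reward obtained when its contribution is removed. A minimal sketch, assuming a toy additive global reward (real multi-agent rewards are rarely this decomposable):

```python
def difference_reward(global_reward, contributions, i):
    """Counterfactual credit for agent i: global reward minus the
    reward recomputed with agent i's contribution zeroed out."""
    g = global_reward(contributions)
    counterfactual = contributions[:i] + [0.0] + contributions[i + 1:]
    return g - global_reward(counterfactual)

team = [3.0, 0.0, 1.0]          # per-agent contributions (illustrative)
G = lambda c: sum(c)            # toy additive global reward
credits = [difference_reward(G, team, i) for i in range(len(team))]
print(credits)                  # each agent credited exactly its contribution
```

With the additive reward the counterfactual recovers each agent's true contribution; the hard cases the survey discusses arise when G is non-additive and the counterfactual must itself be estimated.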
Object detection has always been one of the hot tasks in the computer vision community; its goal is to locate instances in an image and predict each instance's category. In recent years, with the development of deep learning technology, both the accuracy and speed of object detection have made great progress. However, limited by the low resolution and scarce feature information of small objects, detecting small objects still faces many difficulties and attracts more and more researchers' attention. In this paper, we first introduce the mainstream object detection algorithms, and then detail the development of small object detection algorithms from the perspectives of data augmentation, context learning, adversarial learning, feature fusion, and other aspects. We also analyze the performance of these representative algorithms on common datasets. Finally, we summarize the existing problems and anticipate possible future development directions in the small object detection research field.
Current deep learning detection algorithms generally require a large amount of labeled data, and it is difficult to collect samples in some application scenarios. An anomaly detection algorithm based on improved Skip-GANomaly is proposed. The algorithm first enhances the network's ability to extract image spatial and channel information by adding an attention mechanism module, improving feature extraction. On this basis, mixed depthwise convolutions replace ordinary convolutions, so that the network reduces its parameter count while enhancing its ability to capture different types of patterns from the input image. The experimental results show that the AUC of the algorithm across categories of the CIFAR10 dataset is generally higher than that of the original Skip-GANomaly anomaly detection model.
Object tracking has always been one of the hot tasks in the computer vision community. Given the object's size and position in the initial frame of a video sequence, object tracking aims to predict the object's size and position in each subsequent frame. Thanks to the rapid development of convolutional neural networks, object tracking has achieved breakthroughs in accuracy and speed, while it still faces many challenges in large-scale applications, such as appearance distortion, lighting changes, motion blur, and similar background interference. In this paper, through an in-depth literature survey, we first introduce the typical tracking methods based on machine learning. We then detail the representative tracking methods based on deep learning from the perspectives of convolutional neural networks, recurrent neural networks, generative adversarial networks, autoencoders, and other models. In addition, we analyze the performance of these representative algorithms on common tracking datasets. Finally, we point out the unsolved problems and anticipate possible future development directions in the object tracking research field.
With the advent of large-scale datasets and the widespread usage of convolutional neural networks, deep learning technologies have advanced quickly in recent years; in particular, target detection methods in the field of computer vision have become especially important. In the context of the standardized prevention and control of the COVID-19 epidemic, and addressing the phenomenon that individuals still do not wear masks in public places as required, this work offers a real-time mask-wearing detection system utilizing the upgraded YOLOv5 R6.0 algorithm. First, we collect a mask-wearing dataset from the web, then input it into the YOLOv5 R6.0 model, and finally visualize the training and testing results with TensorBoard. According to the experimental data, the precision, recall, and mean average precision (mAP) of this algorithm reach 91%, 97%, and 93.7% on the test set, respectively.
Lane and object detection are important tasks in the environment perception system of autonomous driving, and also the basis of vehicle path planning, decision making, and control. Convolutional neural networks are widely used in various visual detection tasks, providing a new approach for intelligent vehicle visual perception systems. Based on deep learning technology, this paper designs a feature fusion module, the Cross Convolution module, to improve the accuracy of lane detection. A multi-task convolutional neural network is proposed to perform lane detection and object detection simultaneously. The network has a small number of parameters and low computational cost. Real road images collected by real vehicles are used to train the algorithm. Compared with a single-task network, the overall detection speed of the model is improved by 61.77% while mAP decreases by only 0.83%.
Community risk prevention is the foundation of social stability and harmony, and research on big data platforms for community risk prevention is crucial for community safety management. On this basis, the design of a multi-agent collaborative big data platform for community risk prevention was carried out. First, the current construction and application of big data platforms in grassroots communities in China was fully investigated, their shortcomings of poor multi-agent linkage and a low degree of intelligence were identified, and the research direction of this paper was clarified. Then, on the basis of analyzing the connotation and correlation of the multi-subject community and the multi-dimensional data of community risk, the functional requirements of a community risk prevention platform were analyzed. Finally, a design method for a multi-agent linkage big data platform for community risk prevention was proposed in terms of main structure, module division, and design. This study provides theoretical and methodological support for community management.
Fully Convolutional One-Stage Object Detection (FCOS) is a one-stage anchor-free object detection model whose detection accuracy even exceeds some two-stage and anchor-based models, but it still suffers from slow inference. Aiming at the problem that the speed of the FCOS algorithm cannot meet the requirements of real-time detection, a lightweight object detection model, Improved MobileNetV2-FCOS (IM-FCOS), is proposed. First, the MobileNetV2 structure is introduced as the backbone network of FCOS; by replacing ordinary convolutions with depthwise separable convolutions, the model size is reduced while the detection speed is improved. Then, a Sobel module adds the directional gradients extracted by Sobel convolution, increasing the dimension of the input data and enhancing edge semantic information. The Mish activation function replaces the Rectified Linear Unit (ReLU) activation function in the head of the model, retaining the semantic information of the negative part and improving the model's generalization ability. Distance Intersection over Union (DIoU) is used as an evaluation indicator to more accurately describe the similarity between the predicted box and the ground truth and to optimize the regression loss. With the model optimized in the above aspects, experimental results on the Microsoft Common Objects in Context 2014 (MS-COCO2014) dataset show that the mean Average Precision (mAP) of the IM-FCOS algorithm reaches 34.1 and its Frames Per Second (FPS) reaches 31, which meets the need for real-time detection.
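The DIoU criterion mentioned above extends IoU with a penalty on the distance between box centres, normalized by the diagonal of the smallest enclosing box. A self-contained sketch of the standard formula (the example boxes are arbitrary):

```python
def diou(box_a, box_b):
    """Distance-IoU between two (x1, y1, x2, y2) boxes:
    IoU - d^2(centres) / c^2, where c is the enclosing-box diagonal."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # intersection and union
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # squared centre distance over squared diagonal of the enclosing box
    cd = ((ax1 + ax2) / 2 - (bx1 + bx2) / 2) ** 2 \
       + ((ay1 + ay2) / 2 - (by1 + by2) / 2) ** 2
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    return iou - cd / (cw ** 2 + ch ** 2)

print(round(diou((0, 0, 2, 2), (1, 1, 3, 3)), 3))
```

Unlike plain IoU, DIoU still yields a gradient signal when boxes do not overlap, which is why it is preferred as a regression loss.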
An essential part of image processing is feature matching. Commonly used feature matching algorithms include the scale-invariant feature transform (SIFT), speeded-up robust features (SURF), and oriented FAST and rotated BRIEF (ORB). So far, most domestic and international research on these algorithms has focused on speed, accuracy, and robustness, lacking analysis for specific scenes. For these three classical algorithms, this paper first uses the HPSequences dataset to test speed and accuracy in general, then tests accuracy and speed under changes of illumination, angle, and blur for a specific outdoor scene. The experimental results show that SURF maintains high matching accuracy when the object's blur and angle change, and SIFT maintains high matching accuracy when the illumination changes. The robustness of ORB is the worst: in all cases, its matching accuracy decreases the fastest.
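For binary descriptors like ORB's, matching is brute-force nearest neighbour under Hamming distance. A toy stand-in for that matcher, with hypothetical 16-bit descriptors (real ORB descriptors are 256-bit and matched with optimized routines):

```python
def hamming(d1, d2):
    """Bit-level Hamming distance between two integer-packed binary descriptors."""
    return bin(d1 ^ d2).count("1")

def match(query, train, max_dist=10):
    """Brute-force nearest-neighbour matching with a distance threshold;
    returns (query_index, train_index, distance) triples."""
    matches = []
    for qi, qd in enumerate(query):
        ti, dist = min(((i, hamming(qd, td)) for i, td in enumerate(train)),
                       key=lambda p: p[1])
        if dist <= max_dist:
            matches.append((qi, ti, dist))
    return matches

query = [0b1010101010101010, 0b1111000011110000]
train = [0b1010101010101011, 0b0000111100001111, 0b1111000011110010]
print(match(query, train))
```

SIFT and SURF descriptors are floating-point instead, so their matching uses Euclidean distance; the XOR-and-popcount trick here is why ORB matching is so fast despite its weaker robustness.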
In education, quality is the focus; improving the quality of education and teaching has always been one of the goals pursued by educators. Machine learning and data mining techniques can reveal laws in data and extract valuable information to solve problems in various fields. This paper proposes a model to predict National Assessment of Educational Progress (NAEP) exam scores using LightGBM, a gradient boosting decision tree (GBDT) implementation with strong performance in industry. It performs a comparison-based experiment using the same metrics and the same dataset; the lower the Root Mean Square Error (RMSE), the better the model's performance. Accordingly, the LightGBM model performs best, with an RMSE 0.544 and 9.344 lower than SVM and Linear Regression, respectively.
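The comparison metric is simply the square root of the mean squared prediction error. For concreteness (the four score pairs below are hypothetical, not from the NAEP data):

```python
import math

def rmse(y_true, y_pred):
    """Root Mean Square Error: sqrt(mean((y_true - y_pred)^2))."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
                     / len(y_true))

scores_true = [250.0, 262.0, 271.0, 240.0]   # hypothetical exam scores
scores_pred = [252.0, 260.0, 268.0, 243.0]
print(round(rmse(scores_true, scores_pred), 3))
```

Because RMSE squares the residuals before averaging, a model whose RMSE is 9.344 lower (as LightGBM is versus Linear Regression here) has substantially smaller typical prediction errors, not just a shifted mean.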
Reinforcement learning is a popular tool nowadays, applied in a wide range of fields. The Markov Decision Process is the basis of reinforcement learning, and the Bellman equation derived from its value function has become a fundamental tool for finding optimal policies. However, the application of the basic Bellman formula has significant limitations and is computationally complex, resulting in high time costs and low efficiency. Therefore, there are different variants of the Bellman formula and various solutions to different problems, including the Monte Carlo method, Temporal-Difference learning, the Sarsa algorithm, the Q-learning algorithm, Deep Q-Networks, the Hamilton-Jacobi-Bellman equation, and others. This paper reviews these algorithms and compares the complexity of each.
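The Bellman optimality equation the review centres on, V(s) = max_a [R(s,a) + γ Σ_s' P(s'|s,a) V(s')], can be solved directly by fixed-point iteration on a small tabular MDP. A minimal sketch with a hypothetical two-state MDP:

```python
def value_iteration(P, R, gamma=0.9, eps=1e-8):
    """Iterate the Bellman optimality backup until the value function
    changes by less than eps at every state."""
    states = list(P)
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            v = max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a].items())
                    for a in P[s])
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < eps:
            return V

# Toy two-state MDP: "stay" earns 0, "go" earns 1 and moves to the other state
P = {"A": {"stay": {"A": 1.0}, "go": {"B": 1.0}},
     "B": {"stay": {"B": 1.0}, "go": {"A": 1.0}}}
R = {"A": {"stay": 0.0, "go": 1.0}, "B": {"stay": 0.0, "go": 1.0}}
V = value_iteration(P, R)
print(round(V["A"], 2))   # geometric series 1 + 0.9 + 0.81 + ... = 10
```

This exact sweep over all states is precisely the cost the abstract calls computationally complex: Monte Carlo, TD, Sarsa, Q-learning, and DQN all replace the full expectation with sampled transitions to avoid it.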
Taking massive semi-structured meteorological big data as the research object, and based on its mathematical model and the general architecture design of the system, a real-time visualization and analysis processing engine for meteorological big data is constructed. A fast data processing algorithm based on precise location is designed and implemented, and a persistent storage model suited to the business characteristics of meteorological numerical forecast products is proposed based on the common NetCDF technology. This paper builds a visualization conversion algorithm for forecast elements and realizes real-time analysis and processing of product visualization based on multi-loop visualization product generation and adaptation technology. The engine has been applied commercially in the meteorological numerical forecast product processing system of Guangdong Province, generating nearly 300,000 visualized graphic products per day and delivering positive social benefits in meteorological decision analysis and weather-sensitive industries.
Partial multi-label learning (PML) tackles the problem in which each example is assigned a candidate label set, of which only a subset is the ground truth. By decomposing the problem using the first-order strategy, we find that PML is similar to the problem of learning with label noise. Motivated by this observation, we propose a novel method, PML-CV, which tackles the PML problem with a cross-validation approach. Specifically, PML-CV enhances potentially correct labels using cross-validation, and then uses an example-refining scheme to further weaken the impact of noisy labels. We also provide theoretical analysis to explain the effectiveness of the proposed method. Finally, we conduct extensive experiments on different datasets; the results verify that our method is comparable to current state-of-the-art methods.
Robot teleoperation is defined as using remote control to perform tasks that require on-site operation while the operator is located far from the robot's operating environment. COVID-19, an acute respiratory syndrome, has led to a rapid increase in demand for robot teleoperation technology. By applying motion information from human motion to a slave robot, the current study developed a teleoperation system to realize some medical procedures remotely during COVID-19. This system has great potential to advance teleoperation technology during and after the pandemic. Our results show that the motion mapping algorithm offers better accuracy and stability than the traditional kinematic method.
At present, as China's manufacturing industry develops toward high precision, the original manufacturing systems can no longer meet actual needs, and new intelligent manufacturing systems must be built based on Internet big data and intelligent technology. On this basis, this article studies an intelligent manufacturing system based on Internet big data, focusing on the data analysis and online monitoring modules of the system's construction, to improve the level of intelligent manufacturing system construction and provide a reference for future related work.
With the rapid development of smart education around the world, online education is becoming the new normal in the education industry. While online education is booming, a huge amount of data is generated. Online education datasets contain a large number of missing values for various reasons, and their existence hinders educational research, especially educational data mining and learning analytics. Based on the Harvard-MITx Person-Course (HMPC) online education dataset, a batch denoising autoencoder imputation method is proposed. To facilitate performance evaluation, the experiment cleans the HMPC dataset, extracts data without missing values, and then manually introduces missing entries. The experimental results show that the imputed values are similar to the original data, and the proposed algorithm better preserves the classification performance of the dataset.
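The evaluation protocol described (take complete records, hide entries, impute, compare against the known truth) can be sketched with a column-mean baseline. The paper's actual imputer is a batch denoising autoencoder, which is not reproduced here; the four-row table is hypothetical:

```python
def mean_impute(rows):
    """Fill None entries with their column means: a simple baseline against
    which learned imputers like denoising autoencoders are compared."""
    cols = list(zip(*rows))
    means = [sum(v for v in c if v is not None) / sum(v is not None for v in c)
             for c in cols]
    return [[means[j] if v is None else v for j, v in enumerate(row)]
            for row in rows]

# Protocol: start from complete data, hide one known entry, impute, compare.
truth  = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]]
masked = [[1.0, 10.0], [2.0, None], [3.0, 30.0], [4.0, 40.0]]  # hide 20.0
filled = mean_impute(masked)
print(round(abs(filled[1][1] - truth[1][1]), 2))   # absolute imputation error
```

A learned imputer can exploit correlations between columns (here, the second column is exactly 10x the first) and would recover the hidden value far more closely than the column mean does.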
This paper designs a device for analyzing and identifying electrical appliances. The system uses an STM32F103 single-chip microcomputer as the main control chip and an HT7038 as the core electric energy metering device. The HT7038 is connected to current and voltage transformers to analyze and calculate circuit parameters such as voltage, current, and power, so as to determine the types of electrical appliances connected to the circuit. A Kalman filter and the KNN algorithm are used to remove noise from the voltage and current signals and improve recognition accuracy. Tests show that the system can accurately measure the current and active power characteristics of both high-current and low-current appliances, identify their working states, and display the data on screen in real time. The system has a wide measuring range, strong learning and recognition ability, fast response, and high accuracy.
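The denoising step can be illustrated with a scalar Kalman filter for a roughly constant signal, the usual textbook form for smoothing a steady current reading. The readings and noise covariances below are illustrative assumptions, not values from the paper:

```python
def kalman_1d(measurements, q=1e-4, r=0.25):
    """Scalar Kalman filter with a constant-signal model:
    predict (inflate variance by q), then correct with each reading
    weighted by the Kalman gain (r = measurement noise variance)."""
    x, p = measurements[0], 1.0
    out = []
    for z in measurements:
        p += q                      # predict: state unchanged, uncertainty grows
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # correct toward the measurement
        p *= (1 - k)                # shrink uncertainty after the update
        out.append(x)
    return out

noisy_current = [5.2, 4.8, 5.1, 4.9, 5.3, 4.7]   # amps, hypothetical readings
smoothed = kalman_1d(noisy_current)
```

The smoothed estimates vary far less than the raw readings, which is what makes the downstream KNN classification of appliance signatures more reliable.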
With the development of complex networks, community detection is widely used. The Louvain algorithm is a modularity-based community detection algorithm with relatively reliable partition results and high efficiency, and it can detect hierarchical community structure. This paper proposes the following improvements to the Louvain algorithm: 1. introduce the WCC algorithm so that Louvain can be applied to disconnected graphs; 2. when a leaf node and all its neighbor nodes are in the same community during iteration, merge it directly to reduce redundant computation; 3. after the first-level computation, introduce a correction process that makes the bottom-level partition more effective for downstream tasks. Compared with the original algorithm, the improved version produces better partitions with faster processing.
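The quantity Louvain greedily maximizes is Newman modularity, Q = (1/2m) Σ_ij [A_ij - k_i k_j / 2m] δ(c_i, c_j). A direct O(n²) sketch for a small undirected graph (the two-triangle graph is a standard illustrative example, not from the paper):

```python
def modularity(adj, communities):
    """Newman modularity of a partition, for an undirected graph given
    as an adjacency dict {node: set_of_neighbours}."""
    m2 = sum(len(nbrs) for nbrs in adj.values())   # 2m: each edge counted twice
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    comm = {v: c for c, members in enumerate(communities) for v in members}
    q = 0.0
    for i in adj:
        for j in adj:
            if comm[i] == comm[j]:
                a_ij = 1.0 if j in adj[i] else 0.0
                q += a_ij - deg[i] * deg[j] / m2
    return q / m2

# Two triangles joined by a single bridge edge (2-3)
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(round(modularity(adj, [{0, 1, 2}, {3, 4, 5}]), 3))
```

Splitting at the bridge gives a clearly positive Q; Louvain's local moves accept exactly the node relocations that increase this score, which is why its efficiency improvements (such as the leaf-node merge above) matter so much on large graphs.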
In many applications, classifying text documents is expensive and time-consuming. Active learning can reach the same accuracy using only part of the dataset. This paper evaluates probability-based query strategies of active learning for text classification. Random Sampling, Least Confidence Sampling, Margin Sampling, Entropy Sampling, Density Weighted Entropy Sampling, Variance Sampling, and QBC are compared on the 20NEWSgroup dataset and its subsets, with three predictive models (Decision Tree, Naive Bayes, and Logistic Regression) as the estimator. The results can guide the choice of query strategy that balances efficiency and accuracy when classifying text documents with active learning. Density Weighted Entropy Sampling and QBC achieve the highest prediction accuracy after 1000 iterations. The study confirms that the smoothness of the learning curve depends largely on the estimator, and that the construction of the QBC committee influences QBC's performance. The time consumed by querying is also compared: Expected Error Reduction Sampling is so expensive at query time that it is impractical in real-world scenarios.
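Three of the probability-based query strategies can be sketched in a few lines. This is a simplified illustration with hypothetical class posteriors, not the paper's experimental setup:

```python
import math

def least_confidence(probs):
    """Index of the sample whose top predicted probability is lowest."""
    return min(range(len(probs)), key=lambda i: max(probs[i]))

def margin(probs):
    """Index of the sample with the smallest top-two probability margin."""
    def m(p):
        top = sorted(p, reverse=True)
        return top[0] - top[1]
    return min(range(len(probs)), key=lambda i: m(probs[i]))

def entropy(probs):
    """Index of the sample with the highest predictive entropy."""
    def h(p):
        return -sum(q * math.log(q) for q in p if q > 0)
    return max(range(len(probs)), key=lambda i: h(probs[i]))

# Hypothetical posteriors from an estimator over three unlabeled documents.
pool = [[0.9, 0.05, 0.05], [0.4, 0.35, 0.25], [0.5, 0.49, 0.01]]
print(least_confidence(pool), margin(pool), entropy(pool))
```

Note that the strategies disagree: margin sampling favors the document with two nearly tied classes, while least-confidence and entropy favor the most diffuse posterior.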
A k-truss is a dense, clique-based subgraph, and finding the largest k-truss in a graph is an important problem in graph analysis. The truss decomposition algorithm is widely used for finding k-trusses because of its simplicity and ease of implementation, but with the explosive growth of graph data, the traditional algorithm performs a great deal of redundant computation and is no longer practical. The kmax search strategy combining large, medium, and small steps predicts subgraphs via the k-core to reduce the data size and uses the stepped retrieval to cut the number of iterations and eliminate some redundant computation. However, its medium and small steps use fixed values, and on some graphs the number of iterations remains too high. This paper proposes a dynamically adaptive step size and uses cliques to analyze intermediate results, terminating the iteration early and improving execution efficiency. Experimental results show that, compared with the existing kmax search strategy with fixed large, medium, and small steps, about 30% of iterations can be eliminated.
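The k-truss definition underlying truss decomposition can be illustrated with a naive edge-peeling sketch, far simpler than the stepped kmax search studied here; the graph below is a toy example:

```python
from itertools import combinations

def k_truss(edges, k):
    """Return the k-truss: the maximal subgraph in which every edge lies
    in at least k-2 triangles. Peels low-support edges to a fixpoint."""
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed:
        changed = False
        adj = {}
        for e in edges:
            u, v = tuple(e)
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        for e in list(edges):
            u, v = tuple(e)
            support = len(adj[u] & adj[v])  # triangles through this edge
            if support < k - 2:
                edges.remove(e)
                changed = True
    return edges

# A 4-clique plus a pendant edge: the 4-truss keeps only the clique.
g = list(combinations([1, 2, 3, 4], 2)) + [(4, 5)]
print(sorted(tuple(sorted(e)) for e in k_truss(g, 4)))
```

Real truss decomposition maintains edge supports incrementally instead of recomputing adjacency each pass; the redundancy of repeated recomputation is exactly what the kmax search strategies aim to avoid on large graphs.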
Deep learning is a systematic discipline built on neural networks and machine learning. It abstracts complex data models using multiple layers based on intricate machine structures and nonlinear transformation algorithms. Research results in deep learning have been successfully applied to speech recognition, pattern recognition, target recognition, natural language processing, human-machine chess, artificial intelligence, smart cities, and other fields. This paper discusses how deep learning evolved and how that evolution has been applied to speech recognition.
With the rapid progress of computer technology, software based on deep learning is developing in diverse directions. Machine learning technology builds on deep learning technology and can optimize the performance of computer programs to enhance machine translation ability [1]. Progress in machine learning has given rise to computer-aided translation software. The purpose of this paper is to design a computer-aided translation software system using deep learning. The method combines theoretical analysis with practical work, drawing practical conclusions from the analysis. The results show that deep learning can indeed improve the efficiency and update speed of translation-assistance software. On this basis, the paper presents the functional design and main implementation methods of the computer-aided translation software.
The financial trading system has long been a popular application field in the reinforcement learning community, aiming to predict market trends and maximize profit. However, owing to the complexity of the financial market environment, constructing a reinforcement learning model that adapts to different market conditions remains challenging. In this paper, we provide a comprehensive survey of recent advances in financial trading systems organized by reinforcement learning design strategy, covering mainly Q-learning-based and actor-critic-based algorithms. We further analyze the performance of representative methods on common datasets, summarize the open problems of reinforcement learning in financial trading systems, and suggest possible future improvements.
The pathfinding problem has become a popular topic in Artificial Intelligence (AI). This paper discusses an effective way to solve the optimal pathfinding problem, namely the Markov decision process (MDP). Applying MDP to the Pac-Man game, the study describes the design of the algorithm in detail and evaluates its performance. Further experiments analyze the factors that influence Pac-Man's decision-making under MDP. The results show that the value assigned to ghosts is inversely proportional to the win rate of the Pac-Man game.
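Solving an MDP for a policy can be illustrated with textbook value iteration on a toy problem. This is a sketch only; the Pac-Man state space, actions, and rewards (including ghost values) are far richer:

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, tol=1e-6):
    """Classic value iteration with deterministic transitions:
    V(s) <- max_a [ R(s,a) + gamma * V(transition(s,a)) ]."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(reward(s, a) + gamma * V[transition(s, a)]
                       for a in actions(s))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy 1-D corridor: move left/right on cells 0..3, +1 reward for entering 3.
def acts(s): return ["L", "R"]
def step(s, a): return max(0, s - 1) if a == "L" else min(3, s + 1)
def rew(s, a): return 1.0 if step(s, a) == 3 else 0.0

V = value_iteration([0, 1, 2, 3], acts, step, rew)
print({s: round(v, 2) for s, v in V.items()})
```

The values decay geometrically with distance from the reward, so the greedy policy always moves right, exactly the mechanism by which a Pac-Man agent is drawn toward food and repelled by negatively valued ghost states.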
Robotics Technology and Intelligent Automation Control
A face recognition system based on robot scene interaction is applied in specific scenes of daily life. Because life scenes are rich and varied, a robot must have as much scene-processing ability as possible to integrate naturally into human life. In this paper, face recognition technologies are used to realize two robot interaction scenes: meeting new friends and greeting good friends. First, the paper introduces the practical application background of the system and the role of face recognition in it, together with the background and current development of face recognition at home and abroad. Second, based on the business requirements of the system, the overall architecture, module division, and inter-module interaction are determined. Finally, the paper elaborates the implementation of the face recognition system based on robot scene interaction, which is the focus of this work.
Affected by the northward movement of the Jianghuai cyclone on November 17, 2020, rare rain, snow, and freezing weather struck Jilin Province, the largest and most widespread rainstorm, snowstorm, freezing-rain, and gale disaster since the province's meteorological records began. By analyzing the causes of this weather process and its impact on power grid operation, this paper provides a reference and basis for future grid planning, design, operation, and maintenance for disaster prevention and mitigation.
In the field of signal detection, the effective acquisition of weak optical signals has long been a research focus and difficulty, and photoelectric detection technology has become increasingly important to the scientific, technological, and strategic development of many countries. Targeting oil spectrometers that currently use a photomultiplier tube as the optical detector, this paper presents a design scheme for a weak-optical-signal detection circuit. It discusses the basic working principle of the photoelectric detection circuit, designs the amplification and active filter circuits using an AD8628 as the preamplifier, and uses a programmable instrumentation amplifier for secondary amplification. ALTERA's EP4CE15F23C8N chip serves as the system processor, where the data are processed by an FIR filtering algorithm and then transmitted to a host computer for display. The hardware has a compact structure, high data-acquisition resolution, and good real-time transmission, and supports multi-channel synchronous acquisition. Experiments show that the detection circuit based on this design can effectively measure weak optical signals and is suitable for detecting both ordinary and weak optical signals.
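The FIR filtering stage can be sketched in software. This is an illustrative direct-form implementation with hypothetical moving-average taps, not the FPGA code used in this design:

```python
def fir_filter(signal, taps):
    """Direct-form FIR filter: y[n] = sum_k taps[k] * x[n-k]."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * signal[n - k]
        out.append(acc)
    return out

# A 5-tap moving average smooths a noisy constant light level.
taps = [0.2] * 5
noisy = [1.0, 1.4, 0.6, 1.2, 0.8, 1.0, 1.1, 0.9]
print([round(y, 2) for y in fir_filter(noisy, taps)])
```

On the FPGA the same multiply-accumulate structure is realized in fixed-point hardware, with tap coefficients chosen to pass the signal band and suppress detector noise.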
Multi-agent reinforcement learning (MARL) is an area of artificial intelligence that investigates the joint behavior of multiple individual agents and the emergent patterns arising from their interactions with a common environment. Although MARL has a decades-long history, research has intensified recently due to breakthroughs in deep learning. In recent years, deep reinforcement learning (DRL) has achieved significant progress on single-agent problems, and multi-agent systems (MASs) have also benefited greatly from DRL methods, with advances in video games, robot systems, smart grids, and other areas. This article focuses on recent papers on multi-agent deep reinforcement learning (MADRL). First, background knowledge of DRL and MARL is introduced, covering both value-based and policy-based DRL algorithms. Second, representative works in cooperative and competitive scenarios are reviewed, discussing the key ideas and main techniques of each. Lastly, the paper draws conclusions and proposes some potential research directions.
Fleet management has benefited greatly from the application of deep reinforcement learning (DRL) in recent years and has yielded many successful commercial applications such as ride-hailing services, whose basic goal is to manage a fleet of vehicles efficiently to meet demand that is separated temporally and spatially. However, research offering insight into how existing methods handle massive agent interactions from a multi-agent perspective is still missing. In this paper, we review recent RL methods for order dispatching and vehicle repositioning and classify them from the perspective of multi-agent reinforcement learning (MARL). We compare vehicle-based, grid-based, and order-based methods, along with popular datasets and open simulators. Afterward, we discuss several challenges and opportunities for applying DRL in this domain.
Conductor breeze vibration persists for long periods, and the conductor at the outlet of the suspension clamp is subjected to continuous dynamic bending, resulting in fatigue strand breakage. The problem is especially serious for lines in cold, low-temperature regions, where conductor tension is large and suspension points are high, causing damage to conductors, line fittings, and tower components. Prevention and control methods are difficult to establish, and routine inspection is also challenging. In line operation and maintenance, theoretical support such as simulation is necessary, but on-site measurement is even more important for assessing the operating risk of a line. Through a study of conductor breeze vibration, this paper proposes an on-site vibration measurement method based on Doppler laser sensing, providing a path toward long-distance, non-contact, live detection of conductor breeze vibration on transmission lines. The feasibility and correctness of the method are verified by actual measurements on transmission lines.
Machine learning is a technology that enables a computer to extract the information people want from data. In this article, the authors analyze how machine learning can determine which phase of a push-up a person is performing. The article also describes the process of acquiring and analyzing EMG signals and the limitations that were identified but could not yet be resolved in the experiments. The authors further cover the use of MATLAB in the experiment and applications of EMG around the world.
Q-learning is a reinforcement learning method proposed by Watkins for solving Markov decision problems with incomplete information. With the development of reinforcement learning, more and more Q-learning-related algorithms have been proposed, and their range of application has widened. In this paper, we discuss single-agent algorithms including basic Q-learning, deep Q-learning, and double Q-learning, as well as multi-agent algorithms with prominent characteristics, including modular Q-learning, ant Q-learning, and Nash Q-learning. We compare their advantages and disadvantages and offer our views on current applications of Q-learning and its future trends.
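Watkins' basic update rule, the common core of all the variants surveyed, can be sketched on a toy chain environment. This is illustrative only; the environment and hyperparameters are arbitrary:

```python
import random

def q_learning(n_states, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a chain: action 1 moves right, action 0
    moves left; reaching the last state ends the episode with reward 1."""
    random.seed(0)  # reproducible for the sketch
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if random.random() < eps:
                a = random.randrange(2)                  # explore
            else:
                a = max((0, 1), key=lambda x: Q[s][x])   # exploit
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Watkins' update: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learning(4)
print([max((0, 1), key=lambda a: Q[s][a]) for s in range(3)])  # greedy policy
```

Deep Q-learning replaces the table with a neural network, and double Q-learning decouples action selection from evaluation in the `max` term to reduce overestimation.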
Drivers are often tense on highways, and after exiting a high-speed ramp, complex road conditions can easily lead to accidents. Based on this observation, this paper focuses on a driver-assistance collision warning system and a dangerous-driving-behavior detection system. Through quantized deployment on a low-power ARM64 processor, an early-warning anti-collision terminal suitable for driver assistance is designed. By connecting to a cloud data platform over an IPv6 network, the network communication tests of the driver-assistance functions in the vehicle networking system are completed. The designed system meets the basic requirements of driver-assistance anti-collision warning.
In this paper, the factors influencing the preparation of C4 olefins were studied, and suitable temperature and catalyst combinations were selected through optimization. The relationship of ethanol conversion and C4 olefin selectivity to temperature was obtained by least-squares fitting, and the model shows good accuracy. A multiple regression model was developed to optimize the catalyst combinations under the two charging methods A and B, with maximization of C4 yield as the objective function; the optimal parameter combinations were obtained for different charging methods and temperatures, providing a reference for the production process.
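The least-squares fitting step can be sketched with the closed-form normal equations for a straight line. The temperature/conversion pairs below are hypothetical, not the paper's data:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Hypothetical (temperature degC, ethanol conversion %) pairs, roughly linear.
temps = [250, 275, 300, 325, 350]
conv = [10.0, 18.0, 25.0, 33.0, 40.0]
a, b = fit_line(temps, conv)
print(round(a, 3), round(b, 1))
```

The same machinery extends to polynomial terms or, as in the paper's regression model, to multiple predictors such as catalyst composition alongside temperature.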
To find the locations in the many forest areas of Victoria where wildfires are most likely to occur, this paper establishes a wildfire risk assessment model. The anti-entropy method is used to combine the Fire Radiative Power, Normalized Difference Vegetation Index, and Forest Fire Danger Index into a Surveillance Rank Index (SRI), which provides a comprehensive evaluation of the probability and scale of forest fires at a given location. The paper then uses the SRI and k-means clustering to divide Victoria into several small areas and performs a gradient analysis on each to obtain the priority detection sequence of regions within a cluster. Finally, a complete multi-drone cooperative surveillance model is established to plan the drones' paths and select the hovering positions of the repeater-carrying drones. Accounting for the interference of high-altitude terrain with the communication range of the radio relay drones, Victoria needs a total of 761 drones: 639 SSA drones and 122 radio relay drones.
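The k-means partitioning step can be illustrated with a minimal Lloyd's-algorithm sketch. The coordinates are toy values, not the Victorian fire data, and the seeding is deliberately simplistic:

```python
import math

def kmeans(points, k, iters=20):
    """Lloyd's algorithm: assign points to the nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    centroids = points[:k]  # naive deterministic seeding for the sketch
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        centroids = [
            tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl
            else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated groups of hypothetical fire-risk coordinates.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centroids, clusters = kmeans(pts, 2)
print(sorted(len(c) for c in clusters))
```

In the paper's setting each point would additionally carry its SRI value, so that clusters group nearby locations of similar surveillance priority.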
An Autonomous Underwater Vehicle (AUV) is a complex system, and as one of the research fields valued by the state, AUV control has become a research hotspot. The vehicle's system model is nonlinear and its hydrodynamic parameters are uncertain, which places high demands on the design of the AUV control system. In this paper, AUV control is studied based on sliding mode control, fuzzy control, and adaptive control methods. A fuzzy sliding mode controller is designed for depth and attitude control. Building on the design of the horizontal path-tracking fuzzy sliding mode controller, the chattering and tracking-accuracy problems of the horizontal path-tracking sliding mode controller are addressed so that the sliding mode switching gain no longer depends on external information and autonomous motion is realized. Research on AUV control technology has far-reaching significance.
In autonomous vehicles, perception is essential for planning and executing vehicle behavior. A single sensor struggles to cope with complex driving environments, and in the traditional fusion of lidar and camera, lidar suffers from weather-induced signal attenuation, short laser detection range, high cost, and a heavy data-processing load. This paper therefore proposes a method for fusing radar and camera. Two radars detect objects and provide their speed, distance, angle, and other information. On the vision side, the object category is obtained with the YOLOv3 algorithm, and the radar detections are mapped onto the semantically segmented image through a spatial coordinate transformation. Finally, a joint sensor detection model is built using the k-nearest neighbor method to produce the fused target detection results. Compared with lidar-camera fusion, this method offers lower cost, less computation, and stronger anti-jamming ability, and it meets the real-time and accuracy requirements.
To address parameter tuning, this paper introduces an Active Disturbance Rejection Control (ADRC) method for a Permanent Magnet Synchronous Motor (PMSM) servo system based on a Radial Basis Function Neural Network (RBFNN). The parameters of the Nonlinear State Error Feedback control law (NLSEF) and the Nonlinear Extended State Observer (NLESO) in ADRC are adjusted by the RBFNN, which resolves the tuning difficulty introduced by the nonlinear structure of traditional ADRC. The validity of this tuning method for nonlinear ADRC is verified in simulation under external disturbances. The response speed, steady-state accuracy, and disturbance rejection of the second-order ADRC are improved for the PMSM servo tracking system.
In the security analysis of situational awareness systems, predicting the frequency of intrusion events has become an interesting research point. Historical intrusion-frequency records and manually screened data can support an intrusion-perception prediction mechanism, and with advances in hardware, deep learning GRU models can improve prediction efficiency and reduce computation time. This article uses statistics on past intrusion event frequencies, trains and forecasts over 7-day windows, and employs a hidden layer of 60 neurons with an empirically chosen learning rate. Experiments show that the GRU can effectively predict the frequency range of events, and it is expected to be deployed within the situational awareness framework.
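The GRU recurrence at the heart of such a predictor can be sketched for a scalar hidden state. The weights below are untrained and hypothetical; a real model uses vector states, learned parameters, and bias terms:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w):
    """One GRU step for scalar input and hidden state:
    z  = sigmoid(Wz*x + Uz*h)      update gate
    r  = sigmoid(Wr*x + Ur*h)      reset gate
    hc = tanh(Wh*x + Uh*(r*h))     candidate state
    h' = (1 - z)*h + z*hc
    """
    z = sigmoid(w["Wz"] * x + w["Uz"] * h)
    r = sigmoid(w["Wr"] * x + w["Ur"] * h)
    hc = math.tanh(w["Wh"] * x + w["Uh"] * (r * h))
    return (1 - z) * h + z * hc

# Hypothetical weights; run over one 7-day window of normalized counts.
w = {"Wz": 0.8, "Uz": -0.2, "Wr": 0.5, "Ur": 0.3, "Wh": 1.0, "Uh": 0.7}
h = 0.0
for x in [0.1, 0.3, 0.2, 0.5, 0.4, 0.6, 0.2]:  # 7-day intrusion series
    h = gru_step(x, h, w)
print(round(h, 4))
```

The final hidden state summarizes the window and would be mapped through an output layer to the predicted next-period frequency; training adjusts the gate weights by backpropagation through time.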
Fruit grading is a significant stage of fruit commercialization: it increases the added value of fruit and enhances its market competitiveness. This paper first introduces the background and research status of fruit grading technologies, then summarizes grading methods based on traditional digital image processing and on deep learning. Additionally, standard fruit datasets are collected and analyzed. Finally, fruit grading technology is summarized and its prospects are discussed.
Pipeline Magnetic Flux Leakage (MFL) inspection data processing is a necessary part of in-pipeline detection, and its results are the basis for excavation and maintenance. Pipeline construction is currently growing rapidly, so inspection and maintenance work will increase year by year. To improve work efficiency and quality, detection data should be processed automatically or intelligently by computer rather than manually. Since MFL detection remains the mainstream inspection method for long-distance oil and gas pipelines, this paper summarizes MFL data-processing technologies such as pre-processing, data visualization, MFL image recognition, and defect quantification, and discusses their difficulties and hotspots.
The booming technologies of artificial intelligence (AI) and robotics reveal not only their huge potential but also their limitations, compared with humans, when solving implicit problems. As a result, fields such as Human-Computer Collaboration (HCC) and Human-Machine Cooperation (HMC) have been developed and explored, along with related concepts such as Human-in-The-Loop (HiTL). However, these research areas lack uniform standards, procedures, and even measurements, because the work is highly specific to individual scenarios. This paper therefore reviews HCC and HMC from their variances through to their common prospects. From data gathering to decision making and support, and finally to the interaction interface, the processing and operation of HCC/HMC systems are discussed, key issues such as the level of trust in agents are raised, and future work is suggested.
In this paper, a simulation model of the sensorless detection system is built, including the SRM, the power converter, and the indirect rotor-position detection module. Simulation verifies the effectiveness of the proposed sensorless motor-starting scheme and of the indirect rotor-position detection scheme in the operating stage: accurate rotor-position estimation is ensured in the low-speed stage after starting, and the starting response time is short. The simulation analysis shows that the proposed detection strategy can be used to obtain the SRM rotor angle indirectly.
In a fractional-N phase-locked loop (FN-PLL) based on a fixed-position, variable-amplitude loop current, reference spurs can be eradicated and fractional spurs are inversely proportional to the period of the loop current. An optimal design of this period is proposed for low spurs and fast locking. A fixed-position, variable-period pulse generator gives the loop current a fixed position, variable amplitude, and a larger period during locking, eradicating reference spurs and reducing fractional spurs. The loop-current period during locking is then optimally designed to reduce fractional spurs as far as possible while achieving fast locking, by preventing the FN-PLL from oscillating between the locking and transient states. The spurs and locking time of an FN-PLL are evaluated with a 20 MHz reference frequency and a divider ratio of 120.25. Simulation results show that the improvement in fractional spurs becomes negligible once the loop-current period during locking exceeds 27 times the period during transient, while the locking time increases from 9.6 μs to a very large value. Thus, the optimal loop-current period during locking is 27 times the period during transient, and the same design process applies in other situations.
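As context for the spur discussion above, the textbook relation places fractional spurs at offsets of frac × f_ref (and its harmonics) from the carrier, where frac is the fractional part of the divider ratio; a minimal sketch of that standard relation (not the paper's specific loop-current analysis):

```python
def fractional_spur_offsets(f_ref, divider_ratio, n_harmonics=3):
    """Standard FN-PLL relation: fractional spurs appear at offsets of
    frac * f_ref and its harmonics, with frac the fractional part of
    the divider ratio."""
    frac = divider_ratio - int(divider_ratio)
    base = frac * f_ref
    return [k * base for k in range(1, n_harmonics + 1)]
```

For the abstract's example (20 MHz reference, ratio 120.25), the lowest fractional spur offset is 0.25 × 20 MHz = 5 MHz.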
This paper proposes a reinforcement-learning-based evading strategy for the pursuit-evasion problem involving hypersonic-vehicle dynamics (the evader) and multiple interceptors (the pursuers). Since analytical solutions are hard to obtain, a reinforcement learning method is adopted to generate the vehicle's guidance commands during evasion. To cast the problem as a Markov decision process (MDP), we first establish a motion model involving multiple pursuers and one evader. We then use the changes of their positions and velocities in three-dimensional space as the state transition, assuming proportional-navigation interception guidance for the pursuers. The reward function is chosen as the weighted sum of the zero-effort-miss perpendicular to the line of sight and the magnitude of the maneuver. After training with the PPO-Clip algorithm, an evasion policy is obtained. Numerical experiments validate that the proposed method can be used for sequential evasion decisions against multiple pursuers by a hypersonic vehicle.
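The reward structure described above (reward larger miss distance, penalize control effort) can be sketched as follows; the weight values are illustrative assumptions, not the paper's tuned parameters:

```python
def evasion_reward(zem_perp, maneuver_mag, w_zem=1.0, w_ctrl=0.1):
    """Weighted-sum evasion reward: a larger zero-effort-miss (ZEM)
    perpendicular to the line of sight means the pursuer is more likely
    to miss, while large maneuvers cost energy and are penalized."""
    return w_zem * zem_perp - w_ctrl * maneuver_mag
```

In training, a term like this would be evaluated at each step of the MDP against the closest pursuer (or summed over pursuers).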
This paper takes the quick-change tool recognition system of a robot workstation as its research object. A recognition method based on a convolutional neural network combines the feature extraction and classification of traditional methods to recognize quick-change tools accurately. The experimental results confirm that the convolutional neural network is a feasible and effective method for the accurate classification and recognition of quick-change tools; in industrial applications, however, larger and more complete datasets are still needed to train the network and improve its recognition accuracy and generalization ability.
In the era of the Internet of Things, the continuous intellectualization of logistics and navigation systems places greater demands on positioning-system design. Aiming at the requirements of high precision and high reliability, this paper proposes a low-power real-time location-monitoring scheme based on the NB-IoT module BC20. Combined with a cloud platform, it also provides subscription programs and other services for convenient use. The experimental results show that the system achieves low power consumption while positioning successfully and accurately. This achievement is of significance to the future development of low-power positioning technology.
Based on the pulse wave, a portable cardiovascular function detection system is designed that synchronously measures the brachial-artery pulse wave and blood pressure and calculates a number of cardiovascular parameters from the characteristic points of the pulse waveform and the blood-pressure value. The system integrates data collection, processing, display, and report query, and is compact and easy to carry. In comparative experiments with the SphygmoCor pulse wave analysis system, the correlation coefficient r of multiple cardiovascular parameters is greater than 0.79 with p < 0.01, indicating that the system has high detection accuracy.
The multi-agent system (MAS) has long been one of the hot topics in the distributed computing community. With the development of reinforcement learning, multi-agent reinforcement learning (MARL) has gradually attracted more researchers' attention; it aims to solve complex real-time tasks in dynamic multi-agent environments through agent interaction and has been widely used in robotics, human-computer matches, automatic driving, and so on. Unlike simple single-agent reinforcement learning, MARL faces challenges arising from the complex relationships among agents, and the most influential is the issue of credit assignment. Credit assignment substantially impedes reward distribution because the environment generates only global rewards, while the individual credit of each agent is needed during model training; how to estimate and deduce the reward for each agent is therefore a key issue in MARL. According to the difference of strategies, this paper presents an overview of the main approaches to credit assignment in MARL from three aspects: value-based algorithms, policy-based algorithms, and mixing-network-based algorithms. The paper also compares the performance of these algorithms in different multi-agent experimental environments and evaluates the approaches by analyzing the experimental results. Finally, it summarizes the main challenges in multi-agent credit assignment (MACA) with their related solutions and the current defects of algorithms regarding these challenges, and discusses possible future directions for MACA.
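The simplest mixing-network-based scheme in this family is the value-decomposition (VDN-style) approach: the joint action-value is the sum of per-agent utilities, so the global TD gradient flows back to each agent and implicitly assigns credit. A minimal sketch of that idea (not any specific surveyed implementation):

```python
def vdn_joint_q(per_agent_qs):
    """VDN-style additive mixing: joint Q is the sum of each agent's
    chosen-action utility, so each agent's share of the gradient of the
    global TD error is its own credit."""
    return sum(per_agent_qs)

def td_error(joint_q, reward, next_joint_q, gamma=0.99):
    """One-step TD error computed from the single global reward; in VDN
    this same error trains every agent's individual network."""
    return reward + gamma * next_joint_q - joint_q
```

Monotonic mixing networks (e.g., QMIX) generalize this by learning a state-dependent mixing function instead of a plain sum.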
Most previous research on reinforcement learning has focused on modifying the learning mechanism of the Markov decision process; research on training strategies to improve the performance of MARL algorithms has not received enough attention. This paper therefore focuses on improving the training process and proposes a gradual promotion training strategy, divided into two stages: a single-combat stage and a multi-combat stage. At the single-combat stage, scenario-transfer training, rule-based training, self-play training, and mixed training are used to obtain a strong single agent; this is the pre-training process. At the multi-combat stage, multi-agent training is introduced, increasing the complexity of the game so that the strong single agent gradually adapts to the multi-agent task and a strong multi-agent is obtained. This paper combines these two stages with two popular multi-agent reinforcement learning methods, Deep Q-learning and Neural Fictitious Self-Play. Experiments show that the gradual promotion training strategy effectively improves the winning rate and average reward of the agent: compared with the un-pretrained agent, the average reward improves by 25% and the winning rate by 44%. At the same time, the strategy is extremely convenient and easy to implement.
In recent years, automobile intelligence and automatic driving technology have become the development trend of the automobile industry. Lane detection and lane segmentation are important parts of automatic driving, and unconscious lane departure on the expressway is likely to lead to serious traffic accidents. This paper designs a method for simultaneous lane detection and pixel-level segmentation to solve the lane detection and segmentation problem on expressways. The binary lane-detection results and the lane-pixel segmentation results are combined to determine clear lane boundaries. Finally, the discontinuous parts are fitted into continuous lane lines, and obstacles in different lanes are classified by the lane lines.
Manipulators play an important role in industrial production and can complete many tasks, such as handling, picking, and sorting. However, their working environment is not always spacious: other equipment or objects, such as shelves, railings, or computers, may block the manipulator as it moves. Algorithms are therefore needed to ensure an ideal path and reduce path deviation during motion and obstacle avoidance. This paper investigates whether a manipulator can find a feasible motion path among many obstacles using the RRT* algorithm. The D-H method is used to establish the manipulator model, the RRT* algorithm is then studied for path planning, and the result is finally verified by MATLAB simulation. The results show that, under the control of the path-planning algorithm, the manipulator can quickly avoid obstacles and reach the target point.
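For reference, the core of RRT* (sample, steer, choose the cheapest nearby parent, then rewire) can be sketched in a planar toy setting; a real manipulator planner would sample in joint space using the D-H model rather than the plane, and the parameters below are illustrative:

```python
import math, random

def rrt_star(start, goal, obstacles, n_iter=2000, step=0.5, radius=1.0, seed=0):
    """Minimal 2D RRT* sketch. `obstacles` are (cx, cy, r) circles in a
    10x10 workspace. Returns a path from `start` toward `goal`."""
    rng = random.Random(seed)
    nodes, parent, cost = [start], {0: None}, {0: 0.0}

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def collision_free(p):
        return all(dist(p, (cx, cy)) > r for cx, cy, r in obstacles)

    for _ in range(n_iter):
        # goal-biased sampling
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10), rng.uniform(0, 10))
        i_near = min(range(len(nodes)), key=lambda i: dist(nodes[i], sample))
        near, d = nodes[i_near], dist(nodes[i_near], sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not collision_free(new):
            continue
        # RRT* step 1: pick the lowest-cost parent among nearby nodes
        neighbors = [i for i in range(len(nodes)) if dist(nodes[i], new) < radius]
        best_i, best_c = i_near, cost[i_near] + dist(near, new)
        for i in neighbors:
            c = cost[i] + dist(nodes[i], new)
            if c < best_c:
                best_i, best_c = i, c
        idx = len(nodes)
        nodes.append(new)
        parent[idx], cost[idx] = best_i, best_c
        # RRT* step 2: rewire neighbors through the new node when cheaper
        for i in neighbors:
            c = best_c + dist(new, nodes[i])
            if c < cost[i]:
                parent[i], cost[i] = idx, c
    # walk back from the node closest to the goal
    i = min(range(len(nodes)), key=lambda i: dist(nodes[i], goal))
    path = []
    while i is not None:
        path.append(nodes[i])
        i = parent[i]
    return path[::-1]
```

The rewiring step is what distinguishes RRT* from plain RRT and gives it asymptotically optimal path costs.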
As the meteorological industry develops, meteorological collection technology is advancing rapidly, and manual collection has gradually declined and been replaced by automatic collection. Data collection and transmission are the main steps of meteorological collection. The system consists of a software system and a hardware system. The hardware circuit includes an STM32 microcontroller, a 4G chip, a DHT11 temperature-and-humidity sensor, a PM2.5 sensor, a photosensitive sensor, and a sulfide sensor. The software design includes the server design and the database design. The system first collects data from the hardware circuit, processes it, and uploads it to the server; after receiving the information, the receiving end can display it through the server terminal.
The characteristic that the vapor pressure of a low-boiling-point liquid changes with temperature can be used as the power source of a clock, which would promote the development of such timepieces. The thermal-response analysis of the low-boiling-point gas-liquid device is therefore the theoretical basis for the subsequent design of related devices. This paper establishes a mathematical model of the thermal-response process of a closed vessel using the mass-conservation and energy-conservation equations. Combined with an analysis of the physical properties of the selected safe, low-boiling 1-chloro-1,1,2,2-tetrafluoroethane, a numerical simulation of the vessel was carried out. The simulation results show that, depending on the initial gas-liquid ratio, the container exhibits three stages: complete gasification, complete liquefaction, and an intermediate transition stage. As the initial liquid-phase ratio in the vessel increases, the rates of temperature rise and pressure rise decrease. It is concluded that, under working conditions, the gas-liquid device should start from a lower initial liquid ratio.
The TianHe manipulator is responsible for major tasks such as China Space Station construction and operation, and for supporting astronauts' extravehicular operations. The joint is the part of the manipulator that directly executes flexible movement and is the key component ensuring the core functions and performance of the China Space Station manipulator. Because the joint's composition is complex and little reliability-design experience exists for it, research on reliability design methods for the joint is of great significance. Starting from the joint reliability design method, this paper applies redundant design, margin design, environmental-adaptability design, and other methods to complete a comprehensive reliability design of the joint. The ideas and methods introduced here for the reliability design of the China Space Station manipulator joint provide an important reference for the reliability design of complex space electromechanical products.
Traditional rotary drilling technology suffers from a high required axial force, high power consumption, large mass, and serious drill-bit wear, making it difficult to apply to the exploration of extraterrestrial bodies with weak gravity and hard rock layers. A high-frequency micro-amplitude impactor can greatly reduce the drilling pressure and power consumption by using high-frequency impact to drill objects of high hardness such as rock. Therefore, based on a piezoelectric transducer, a high-frequency micro-amplitude impactor is designed by analyzing the configuration of the horn and the collision behavior of the spring-mass system. The no-load frequency of the impactor is verified to be above 1000 Hz, with an amplitude of about 0.2 mm. Finally, the validity of the design is verified by a sandstone-drilling experiment.
With the rapid development of surface-mount technology (SMT), the punching speed and quality of SMT carrier-tape punching machines have become a major challenge for efficient production in the electronics industry. Traditional SMT carrier-tape punching machines run slowly and punch poorly, making it difficult to improve production efficiency. To address these problems, a high-speed SMT carrier-tape punching machine is designed through theoretical research and dynamic-simulation optimization in Creo. With a high-speed cam mechanism and multiple punching pins arranged in parallel, the carrier tape is produced at 6 m/min, and the post-punching accuracy of the carrier tape is increased by 10% to 20%. Rapid, high-quality punching is thus realized, and the machine is stable, reliable, and easy to operate. The parameters are tested accurately; in addition to meeting users' requirements, the machine can reduce energy consumption and production cost.
To address the high energy consumption of ordinary hydraulic brick presses, an energy-saving retrofit is proposed that replaces the asynchronous motor with an AC permanent-magnet synchronous servo motor. Simulation models of the ordinary hydraulic brick press and of the retrofitted servo direct-drive pump-controlled hydraulic brick press are established in AMESim, and their energy consumption is simulated and analyzed. The results show that the retrofitted servo direct-drive pump-controlled press is 20.7% more energy-efficient than before the retrofit. The energy saving arises because, compared with the ordinary press, the servo direct-drive pump-controlled press eliminates overflow loss in the pressure-holding stage and reduces dormant loss in the dormant stage.
To meet the lightweight requirements of the base structure of a piece of electronic equipment, two prefabrication methods were compared, and an integrated solution was adopted in which the main body and the pawl are three-dimensionally woven and then joined. This solves the problems of guaranteeing strength at the screw-connection positions and of surface cracking after mechanical testing. The solution passed the installation-test verification, effectively achieved a lightweight base, completed the research on 3D braiding technology for complex parts, and provides an important reference for the subsequent lightweight design of related structures.
With the rapid aging of China's population, more and more power-assisted devices are needed. Aiming at the walking problems of the elderly and the disabled, this paper designs a motor-driven lower-limb exoskeleton assistance robot. The paper first briefly introduces the mechanical structure, hardware system, and software control system of the exoskeleton, and then proposes an assistance control algorithm based on the dynamic model of the lower-limb exoskeleton robot. Plantar pressure is used as the metric for evaluating the effectiveness of the control algorithm. Experimental results show that the proposed algorithm significantly reduces the wearer's plantar pressure, verifying the effectiveness of the assistance control algorithm.
Existing rehabilitation-robot mechanism design considers only mechanical indexes and cannot characterize the kinematic and dynamic behavior of the generalized closed-loop mechanism formed by the robot mechanism and the human musculoskeletal system during physical human-robot interaction. This study therefore builds a coupled simulation platform of the human musculoskeletal system and an upper-limb rehabilitation-robot mechanism by combining the OpenSim and MATLAB/Simulink simulation software. By studying the force-transmission characteristics during human-robot interaction, the paper reveals how the kinematic and dynamic parameters of the upper-limb rehabilitation robot's manipulator relate to human comfort, which serves as the evaluation index of human-robot interaction performance. On this basis, a multi-objective genetic algorithm is used to optimize the structure of the upper-limb rehabilitation robot. The results show that the robot with optimized mechanism parameters offers higher comfort during interaction with the human body, providing an important reference for the mechanism design of rehabilitation robots.
For the inverse kinematics of a series-parallel hybrid 7-DOF humanoid manipulator, an analytical algorithm based on adaptive parameterization of the joint posture angle is proposed. Firstly, according to the configuration of the humanoid manipulator, the schematic diagram of the mechanism is established and the forward kinematics is derived. Secondly, the posture angle of the shoulder joint is adaptively parameterized by the moderate principle, and the remaining joint posture angles are deduced analytically to complete the inverse kinematics solution. Finally, the correctness of the proposed method is verified by simulation analysis, and the merits of the joint posture angles are assessed with an evaluation function, which shows that parameterizing the joint posture angles by the moderate principle is reasonable.
Industrial robots have great advantages in machining large, complex components in the aerospace field, but insufficient joint stiffness results in poor machining accuracy. This paper first analyzes the stiffness of the robot's joints and establishes a joint error model. Secondly, the kinematics of the KUKA KR 600 robot is modeled with the modified D-H method, the model is calibrated in MATLAB, and the Jacobian matrix J is calculated. Thirdly, the joint stiffness is identified through experiments. Finally, the joint error model is validated: the relative errors between predicted and measured values in the x, y, and z directions are 21.39%, 17.01%, and 14.46%, respectively, showing that the established joint error model has large potential for predicting the deformation of the robot end-effector.
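A conventional formulation of such a joint-stiffness error model (a standard textbook relation, not necessarily the paper's exact one) maps an external wrench F to an end-effector deflection through the Jacobian: with diagonal joint stiffness K_theta, the Cartesian compliance is C = J K_theta^-1 J^T and the deflection is dx = C F. A minimal sketch:

```python
import numpy as np

def end_deflection(J, k_theta, wrench):
    """Predict end-effector deflection under an external wrench using the
    standard joint-stiffness model: dx = J * inv(K_theta) * J^T * F,
    with k_theta the diagonal joint-stiffness values."""
    C = J @ np.diag(1.0 / np.asarray(k_theta, float)) @ J.T  # Cartesian compliance
    return C @ np.asarray(wrench, float)
```

Identifying k_theta experimentally, as the paper does, amounts to fitting these values so the predicted dx matches measured deflections.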
Aiming at the single function and low intelligence of existing shoe dryers, an intelligent solar shoe dryer integrating sun-baking, drying, and sterilization, based on Internet cloud technology, is designed. Firstly, the mechanical structure and the double-cam motion system of the shoe dryer are innovatively designed; the mechanical system and its motion are designed and optimized on a joint SOLIDWORKS-MATLAB simulation platform, and their rationality is verified. Secondly, a control system based on the STM32 is developed, realizing intelligent opening and closing and automatic power-off with temperature and humidity sensors. Finally, the stress of the loaded parts is simulated in ANSYS to verify the rationality of the structural design and material selection. To facilitate remote control by users, a client platform based on Internet cloud technology is developed that provides information storage, remote control, and other value-added services, making the shoe dryer more intelligent and convenient to use.
For mobile robots, one crucial factor is the ability to remain balanced, and developing mechanisms that allow such robots to recover from a fallen state is of great importance. This paper presents an algorithm that efficiently solves this problem for mobile four-wheel-drive robots. An inertial sensor collects data on linear acceleration, angular velocity, and rotational dynamics from the robot. The data are then used in a PID controller program to find the maximum speed at which the robot can travel, and the maximum acceleration and deceleration, without tipping over. An environment is simulated in Webots to evaluate the implementation of this algorithm on a four-wheel-drive robot.
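The PID controller at the core of such a scheme can be sketched as follows; the gains and time step are illustrative, and in the described system the measured value would be the pitch angle (or angular rate) from the inertial sensor, driven toward an upright setpoint to cap acceleration before tipping:

```python
class PID:
    """Textbook discrete PID controller (illustrative gains)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt          # accumulate error over time
        deriv = (err - self.prev_err) / self.dt  # rate of change of error
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

In each control cycle, the output would be applied as a correction to the wheel commands, and the speed limits found are the largest commands for which the pitch error stays bounded.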
High Performance Computing and Wireless Communication Image Processing
Deep neural networks (DNNs) can approximate value functions or policies for reinforcement learning, making reinforcement learning algorithms more powerful. However, some DNNs, such as convolutional neural networks (CNNs), cannot extract enough information, or take too long to obtain enough features, from the inputs in certain reinforcement learning settings. For example, the input of Google Research Football, a reinforcement learning environment that trains agents to play football, is a small map of player locations; the information lies not only in the players' coordinates but also in the relationships between players, and CNNs either cannot extract enough of this information or take too long to train. To address this issue, this paper proposes a deep Q-learning network (DQN) with a graph neural network (GNN) as its model. The GNN transforms the input data into a graph that better represents the football players' locations, extracting more information about the interactions between players. With two GNNs approximating its local and target value functions, the DQN allows players to learn from experience by using value functions to estimate the prospective value of each intended action. The proposed model demonstrates the power of GNNs in the football game by outperforming other DRL models with significantly fewer training steps.
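The core idea of using a GNN as the Q-network can be sketched with one message-passing layer followed by a linear Q head; this is a toy stand-in for the paper's model, with hypothetical weight matrices, but it shows how each node (player) aggregates its neighbours' features so the embedding encodes relations between players rather than raw coordinates alone:

```python
import numpy as np

def message_passing_q(node_feats, adj, w_self, w_nbr, w_out):
    """One mean-aggregation message-passing layer plus a linear Q head.
    node_feats: (n_players, d) features; adj: (n, n) adjacency matrix;
    returns one Q-value per action (columns of w_out)."""
    node_feats = np.asarray(node_feats, float)
    adj = np.asarray(adj, float)
    deg = adj.sum(1, keepdims=True).clip(min=1.0)     # avoid divide-by-zero
    msgs = (adj @ node_feats) / deg                   # mean over neighbours
    h = np.tanh(node_feats @ w_self + msgs @ w_nbr)   # updated node embeddings
    graph_emb = h.mean(0)                             # pool over all players
    return graph_emb @ w_out                          # Q-value per action
```

In a DQN, two copies of such a network (local and target) would be trained on replayed transitions, exactly as with any other function approximator.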
Compared with pairwise/triplet-similarity-based hashing, central-similarity-based hashing can capture data relationships more efficiently. However, previous works are still limited in feature expression capability due to information redundancy, and they use only similar/dissimilar label pairs, so they cannot fully capture the complex semantic information of the visual content, which limits retrieval performance to some extent. To address these issues, attention-based hashing with central similarity learning (ACSH) for image retrieval is proposed. Firstly, it uses an off-the-shelf algorithm to generate semantic hash centers with sufficient Hamming distance between each other for the data points. Secondly, it embeds a spatial attention mechanism into the feature extraction module, enabling the deep hashing network to focus on important features and suppress unimportant ones. Finally, in the training phase, a classification task is introduced to supervise the feature learning of the spatial attention mechanism so that it captures more complex semantic information of the visual content. Comprehensive experiments show that ACSH achieves better retrieval performance than existing central-similarity-based deep hashing methods on three standard benchmarks.
In recent years, deep reinforcement learning has been one of the hot topics in artificial intelligence and can be applied in many fields. However, it also faces many problems and challenges, such as insufficient sample data, large sample spaces, and complex action spaces. The emergence of AlphaGo addressed the problem of a large sample space very well, and systems such as AlphaGo Zero and AlphaStar were released afterwards. These intelligent frameworks can be applied in various scenarios. Differentiating from other works, this paper focuses on an in-depth analysis of the internal connections of the Alpha series from the perspective of the problems and challenges they solved, to give insight into the future development of reinforcement learning.
The arrival of COVID-19 has led to the emergence of a large amount of information, and inaccurate information can lead to group polarization and cognitive dissonance, so the management of public opinion information has become urgent. The COVID-19 public opinion information analysis system described in this paper selects Weibo as its data source: Weibo hot topics, as the platform users mainly focus on, are more authoritative and credible than emerging media, and choosing such a large platform also ensures the openness and transparency of information. This paper focuses on the development process of the system, including the acquisition and analysis of COVID-19 data and the database design of the COVID-19 public opinion information analysis system. The system not only realizes the basic functions of public opinion information analysis but also runs smoothly and interactively.
Language style, including word choice and syntactic structure, is a pivotal factor affecting the formality of a text. Text formalization is a sub-problem of text style transfer that aims to transform everyday expression into an academic style. SeqGAN bypasses the non-differentiability problem in backpropagation caused by the discrete nature of tokens and pioneered the application of GANs to text generation, but GANs had not previously been used for formality style transfer, a task worth exploring. We apply the Monte Carlo idea of SeqGAN to the task of formalizing text style; that is, we use sampling to obtain the state-action value. In academic writing, the choice of key words affects the quality of the entire sentence, and in this paper we propose Masked SeqGAN to address this problem. The architecture of our model is similar to SeqGAN, with the difference that after a complete sentence is generated, a <mask> tag is placed at the current position and the discriminator scores the masked sentence and the original sentence separately. The difference in score indicates the contribution of the word to the entire sentence: words with high contributions are considered important, and this contribution is used to update the policy. Experiments show that Masked SeqGAN outperforms previous GAN-based methods in both automatic and manual scoring.
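The mask-and-rescore idea can be sketched as follows. This is an illustrative toy, not the paper's implementation: the discriminator below is a made-up word-counting stand-in (Masked SeqGAN uses a learned neural discriminator), and the sentence is an invented example.

```python
# Toy discriminator: pretends that a few "formal" words raise the score.
def toy_discriminator(tokens):
    formal = {"we", "propose", "method", "results"}
    return sum(1.0 for t in tokens if t in formal) / len(tokens)

def word_contributions(tokens):
    # Contribution of a word = score drop when it is replaced by <mask>
    base = toy_discriminator(tokens)
    contribs = []
    for i in range(len(tokens)):
        masked = tokens[:i] + ["<mask>"] + tokens[i + 1:]
        contribs.append(base - toy_discriminator(masked))
    return contribs

sentence = ["we", "propose", "a", "new", "method"]
scores = word_contributions(sentence)
print(scores)
```

Words whose removal lowers the discriminator score the most receive the largest contributions, and it is these per-word contributions that the method feeds back into the policy update.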
At present, China's economy is developing rapidly, urbanization is accelerating, and high-rise buildings are springing up everywhere. In the pursuit of high-quality development, the safety of buildings is the primary consideration. With the completion of China's independently developed Beidou satellite navigation system, the navigation industry has also developed rapidly, and applying the Beidou system to the safety monitoring of high-rise buildings has become a new trend in building development. This paper surveys and reviews the application of Beidou high-precision positioning in high-rise building monitoring and proposes future research directions.
Recent deep learning models developed for classification problems in diagnostic medical imaging focus on supervised learning. However, supervised models depend heavily on a large amount of accurately labeled training data, which is expensive and time-consuming to obtain and requires human expertise, challenging the generalization ability of deep learning models in many medical fields. To address this problem, we propose MeanMatch, a semi-supervised method that uses a limited amount of labeled data together with a large amount of unlabeled data to train deep models. We also employ a voting technique to assemble the deep models and improve generalization performance. Finally, we conduct experiments on a breast-ultrasound image dataset to verify the effectiveness of our approach and compare it with current state-of-the-art semi-supervised methods. Experimental results show that our method achieves the highest classification accuracy of 83.0% on the test set, outperforming or matching both the comparison methods and the supervised models.
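The ensembling step can be illustrated with a minimal majority-vote sketch. This is an assumption about one simple way to "assemble" models by voting; the abstract does not specify the exact voting scheme, and the prediction arrays below are toy data.

```python
import numpy as np

def majority_vote(predictions):
    """Majority vote over class labels; predictions is (n_models, n_samples)."""
    predictions = np.asarray(predictions)
    n_classes = predictions.max() + 1
    # Count votes per class for each sample (one bincount per column)
    votes = np.apply_along_axis(
        lambda col: np.bincount(col, minlength=n_classes), 0, predictions)
    return votes.argmax(axis=0)

# Three hypothetical models' labels for four samples
preds = [[0, 1, 1, 2],
         [0, 1, 2, 2],
         [1, 1, 1, 0]]
print(majority_vote(preds))  # [0 1 1 2]
```

Voting tends to cancel out the uncorrelated errors of individual models, which is why it can lift generalization above any single member of the ensemble.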
The quality of the data determines the quality of the model. In this paper, grassland degradation data from the Three-River Headwaters region were preprocessed and labeled with multiple classes. Guided clustering and semi-supervised clustering were compared, and the two methods were combined to classify and label the data, improving the accuracy and completeness of the classified data.
Hardware implementations of communication systems currently suffer from poor scalability and portability. The RFNoC development framework effectively solves these problems: it is simple and flexible to develop with, which has made it one of the mainstream technologies for software-defined radio development. This paper implements an OFDM communication transmission system with RFNoC, using the AIR-T embedded radio as the platform. The OFDM signal modulation and demodulation modules are implemented on the FPGA chip, and the OFDM transmitting and receiving modules are configured on the AD9371. The experimental results show that the RFNoC framework can efficiently support the development of an OFDM signal transmission system and that the system runs well on the AIR-T hardware platform.
Firstly, a MIMO-ISAR echo model based on a maneuvering-target model is constructed. To address the difficulty existing equipment has in generating large-bandwidth signals, time-domain and frequency-domain bandwidth synthesis methods for multi-carrier-frequency MIMO radar are studied so as to achieve high resolution in the range image. Simulation results verify the feasibility of the proposed algorithm.
With the increasing number of radio-frequency terminals, electromagnetic spectrum resources have become particularly important. Traditional electromagnetic spectrum monitoring systems face problems of insufficient transmission bandwidth, a shortage of monitoring-node resources, and a large degree of human interference. In this paper, edge computing technology is applied to electromagnetic spectrum monitoring, and the edge computing framework KubeEdge is used to build a spectrum monitoring system. The system is divided into a cloud and edge monitoring nodes, and multiple distributed spectrum monitoring nodes can be deployed. Taking signal modulation recognition as an example application, the results show that the system can effectively deploy spectrum monitoring applications to intelligently identify signal modulation modes, and the recognition accuracy for some modulation modes exceeds 90%.
Computer technology is developing rapidly, and its scope of application is constantly expanding. Nowadays, many colleges and universities introduce computer technology into tourism English teaching to improve teaching efficiency and quality and to promote students' practical ability, which tourism English particularly demands; under the traditional teaching mode, students' learning efficiency is not ideal. Therefore, this paper introduces the characteristics of computer-assisted instruction, analyzes the application of computer technology in tourism English teaching, and discusses its specific application from two aspects: establishing a computer index system for the tourism English training mode and applying computer translation software reasonably, hoping to provide a reference for tourism English teaching in colleges and universities.
In recent years, GNNs have become one of the hottest topics in deep learning due to their powerful ability to model relational data and their wide real-world applications. Reinforcement learning is also widely applied in many fields, but traditional ways of training reinforcement learning agents are not accurate enough. It is therefore reasonable to combine GNNs and reinforcement learning, making good use of the modeling capability of GNNs to enhance reinforcement learning performance. In this paper, a basic GNN is compared with existing works to see whether it is possible to combine GNNs and reinforcement learning. PettingZoo is chosen as a representative multi-agent system, and a Graph Convolutional Attention Network with a Q-net (GCANQ) is designed and validated, demonstrating the efficient computational capability of GNNs. Experiments show that the GNN performs strikingly well on a common multi-agent system at a small scale, albeit with some overfitting, and the training speed of GCANQ is acceptable given the complex computation required.
Machine vision inspection combines fast, accurate computing with image processing technology and greatly improves intelligent image recognition. The ultimate goal of a recognition and classification system is to identify detected defects quickly and accurately. Defects with obvious characteristics can be identified in a short time because of their large differences, but imperfect recognition technology and low algorithm robustness make detection inaccurate and recognition more difficult. In this paper, for targets with similar characteristics, the membership function of the target to be recognized is constructed after extracting the characteristic parameters of the recognized targets. The characteristic function of the fuzzy set is established to calculate the membership degree of a target sample in the fuzzy set, and finally the class of the characteristic sample is recognized according to the maximum-membership principle. The experimental results show that this pattern-recognition-based method performs well, offers guidance for many kinds of image feature recognition and processing, and will greatly reduce the intensity and workload of manual processing.
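The maximum-membership principle can be sketched minimally as follows. This is an illustrative toy, not the paper's system: the triangular membership functions, the defect class names, and their parameters are all made-up assumptions; the paper constructs its membership functions from extracted characteristic parameters.

```python
def triangular(x, a, b, c):
    """Standard triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical defect classes, each described by a triangular fuzzy set
# over a single normalised feature value
classes = {
    "scratch": (0.0, 0.2, 0.5),
    "pit":     (0.3, 0.5, 0.7),
    "stain":   (0.5, 0.8, 1.0),
}

def classify(feature):
    # Compute the membership degree in each fuzzy set, then pick the maximum
    memberships = {k: triangular(feature, *p) for k, p in classes.items()}
    return max(memberships, key=memberships.get)

print(classify(0.45))  # "pit": its membership (0.75) dominates here
```

A real system would use multi-dimensional characteristic parameters and membership functions fitted to the target samples, but the decision rule, assign the sample to the class with the highest membership degree, is the same.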
In order to study the transformation and upgrading capability of high-tech enterprises and formulate corresponding countermeasures, this paper establishes an evaluation index system for the transformation and upgrading capability of high-tech enterprises according to their strategic environment and operating characteristics, and designs and implements an investigation scheme. Using factor loading analysis, the paper identifies the important, more important, and general factors that affect the transformation and upgrading ability of high-tech enterprises. A variable contribution rate model is then used to provide an overall analysis of the transformation and upgrading path of high-tech enterprises, providing a scientific basis for formulating transformation and upgrading countermeasures for high-tech industries.
Policy improvement has a long history and is an essential element of dynamic programming. Policy improvement methods can be divided into four general categories: heuristic methods, approximation methods, sampling methods, and numerical improvement. Alongside the classic methods, several variants are also introduced, including Lambda Policy, Path Integral, High-Confidence Policy Improvement, and Finite Sample Analysis for SARSA with linear function approximation. This paper introduces these policy improvement methods and evaluates and compares them from three perspectives: training speed, sampling efficiency, and method capability.
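The classic policy-improvement step at the heart of dynamic programming can be sketched as follows. The two-state deterministic MDP below is a made-up toy for illustration: given a value function, the improved policy acts greedily with respect to the one-step lookahead.

```python
gamma = 0.9  # discount factor

# Toy deterministic MDP: P[state][action] = (next_state, reward)
P = {0: {0: (0, 0.0), 1: (1, 1.0)},
     1: {0: (0, 0.0), 1: (1, 2.0)}}

def improve(policy, V):
    """One policy-improvement step: greedy w.r.t. r + gamma * V(s')."""
    new_policy = {}
    for s in P:
        q = {a: r + gamma * V[ns] for a, (ns, r) in P[s].items()}
        new_policy[s] = max(q, key=q.get)
    return new_policy

V = {0: 0.0, 1: 10.0}
pi = improve({0: 0, 1: 0}, V)
print(pi)  # the greedy policy steers toward the high-value state
```

The surveyed categories (heuristic, approximation, sampling, numerical) differ mainly in how the lookahead values `q` are obtained when they cannot be computed exactly as above.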
The Internet environment has become more and more complex in recent years with the increasing scale of users and the richness of network applications. Malicious network flows are difficult to inspect with only the five-tuple information contained in the network packet header, and application-layer traffic analysis has become an important basis for network security. Deep packet inspection (DPI) is an application-layer flow monitoring and identification technology that inspects and analyzes each network packet through a regular expression matching system to check its compliance and security. However, with the continuous increase in Internet bandwidth, and limited by processor computing capability and the computational complexity of regular expressions, a content recognition subsystem based on a central processing unit (CPU) struggles to identify malicious network flows in high-speed, high-traffic network environments, so the regular expression matching system has become a bottleneck in network security. A field-programmable gate array (FPGA)-based regular expression matching acceleration system with high computational parallelism is proposed in this paper. The regular expressions are converted into state transition tables and translated into Verilog HDL through lexical analysis. Data throughput and development efficiency are greatly improved through an innovative design of the hardware structure and compilation tools. The function and performance of the system were verified with simulation software, showing that the throughput of the system on a Xilinx Alveo U200 acceleration card can exceed 50 Gbps with more than 10,000 regular expressions of thirty transition states each implemented. This work will greatly improve the overall performance of firewalls and shorten the system iteration and upgrade cycle.
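Evaluating a regular expression as a state transition table, the representation the hardware walks one symbol per clock cycle, can be sketched as follows. This is an illustrative toy, not the FPGA design: the pattern "ab*c" and its hand-built table are assumptions for demonstration.

```python
# Hand-compiled transition table for the pattern "ab*c":
# transition[state][symbol] -> next state; missing entries reject (-1)
transition = {
    0: {"a": 1},
    1: {"b": 1, "c": 2},  # state 1 loops on 'b', moves to the accept state on 'c'
}
accepting = {2}

def matches(text):
    state = 0
    for ch in text:  # one table lookup per input symbol
        state = transition.get(state, {}).get(ch, -1)
        if state == -1:
            return False
    return state in accepting

print(matches("abbbc"), matches("ac"), matches("abx"))  # True True False
```

Because each input symbol needs only a single table lookup, many such automata can run in parallel in hardware, which is the source of the throughput gain over a CPU-based matcher.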
A generative adversarial network (GAN) is a critical invention in machine learning, and these networks have made great progress recently. From breakthroughs in classification to the unification of detection and segmentation, the fortresses of deep learning have been conquered one after another during the great development of GANs. A GAN is a deep learning framework constructed from two neural networks, a generator and a discriminator, which learn by playing a game against each other to produce convincing output. Machine learning practitioners are increasingly turning to the power of GANs for digital image processing; applications that benefit from GANs include generating art and photos from text descriptions, upscaling images, transferring images across domains, and many others. It is therefore meaningful to review the representative enhanced GAN architectures that have been devised, with their unique features, for solving specific digital image processing problems. In this paper, we review the evolution of GANs and their improvements in digital image processing.
Deep learning techniques have been popularly applied to artistic tasks such as turning photographs into paintings or creating paintings in the style of modern art; however, East Asian art has been largely ignored. In this paper, we aim to apply deep learning models to create traditional Chinese landscape paintings. We achieve this goal through two deep learning techniques: image style transfer and image synthesis. We apply the Visual Geometry Group (VGG) network for style transfer, which takes a pair consisting of a content image and a style image and outputs an image that renders the target content in the desired style. For image synthesis, we apply the Deep Convolutional Generative Adversarial Network (DCGAN), which requires a large set of painting images to produce paintings, as realistic as possible, that do not exist but mimic the training dataset.
Age progression and regression is a task that aims to render face images with or without "aging" effects. The problem originally arose in the psychophysics and human perception community but has attracted tremendous interest in the computer vision community in recent years. In this paper, we give a detailed analysis of the facial aging problem and conduct a comprehensive survey of existing methods. Many different methods are available for face aging rendering, each with its own advantages and purpose. We categorize the existing methods into three classes: physics-based models, example-based methods, and deep-learning-based methods. The first two classes are more traditional methods developed over the last few decades, while the deep-learning-based methods leverage the huge success of the deep learning models that have emerged in recent years. We review the representative works in each category and offer insights into future research on this topic.
The combined use of intrinsic optical imaging and electrophysiological recording has become an important method for revealing the fine-scale structure of the orientation map in the primary visual cortex. However, many repetitions are often needed to obtain the mean activity because of the low signal-to-noise ratio of intrinsic optical imaging. To overcome this problem, we propose a Bayesian method that obtains a highly accurate orientation map with fewer repetitions by fusing intrinsic optical imaging and electrophysiological recording. We first use a Gaussian regression model to obtain the posterior distribution of the cortical orientation map from the intrinsic optical imaging data, and then compute the conditional distribution of the orientation map given the measurements from electrophysiological recording. Simulation results suggest that our method significantly improves performance over classical methods and is very robust to noise.
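The core of such Gaussian fusion can be sketched in one dimension. This is an illustrative simplification, not the paper's full model: the numbers are invented, orientation is treated as a linear (rather than circular) quantity, and the paper works with a full spatial map rather than a single value.

```python
def fuse(mu1, var1, mu2, var2):
    """Bayesian fusion of two independent Gaussian measurements of one quantity."""
    # Posterior precision is the sum of the measurement precisions
    prec = 1.0 / var1 + 1.0 / var2
    var = 1.0 / prec
    # Posterior mean is the precision-weighted average of the measurements
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# Noisy imaging estimate (large variance) fused with a sharper electrode estimate
mu, var = fuse(40.0, 4.0, 50.0, 1.0)
print(mu, var)  # 48.0 0.8
```

The fused mean sits closer to the lower-variance electrophysiological measurement, and the fused variance is smaller than either input, which is why fewer imaging repetitions suffice once the two modalities are combined.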
With the continuous popularization of the sharing economy, more and more goods and services are being shared. In the field of logistics, the concept of the sharing economy has been combined with green environmental protection, forming green logistics and express delivery paths such as shared express boxes. However, existing shared express packaging boxes have problems such as a single form and structure and the lack of a professional recycling system, so their actual value cannot be fully realized. Accordingly, based on the concept and value of the shared express packaging box, this paper sets out its design requirements from the perspectives of material performance, structural design, and recycling, and introduces various recycling modes for shared express packaging boxes.
We have conducted a series of studies and analyses addressing the problem of line art colorization, focusing on Generative Adversarial Networks (GANs), a leading neural network architecture for this problem. From the large number of studies based on this architecture, we improved, applied, and analytically compared four methods, pix2pix, pix2pixHD, white-box, and scaled Fourier transform (SCFT), which together represent the mainstream problem-solving directions in the field of line art colorization as far as possible. Finally, two reference quantities were introduced to quantify the results of the analysis.
This paper discusses WGAN, an important variant of the GAN model, and applies it to the generation of financial asset time series. Both WGAN and RNN can be applied to generating long-period time series; the RNN model has the advantage of preventing the vanishing-gradient problem, while the WGAN model uses the Wasserstein distance to measure the distance between the real distribution and the generated distribution, overcoming the defect of the JS distance in the original GAN model. RNN and WGAN are used to generate the daily-frequency return series of the Shanghai Composite Index and the monthly-frequency return series of the S&P 500. The results show that both can reproduce the long-range correlation and other characteristics of the real series, but the data generated by WGAN are significantly more authentic than those generated by RNN. The WGAN model can therefore be applied to the generation of financial asset time series, with better results than the RNN model.
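For intuition about the distance WGAN's critic approximates: for equal-size one-dimensional samples, the empirical Wasserstein-1 distance reduces to the mean absolute difference between the sorted samples. This is a sketch for intuition only (the toy arrays are invented), not the paper's training procedure, which estimates the distance with a learned critic network.

```python
import numpy as np

def wasserstein_1d(a, b):
    """Empirical Wasserstein-1 distance between equal-size 1-D samples."""
    a = np.sort(np.asarray(a, dtype=float))
    b = np.sort(np.asarray(b, dtype=float))
    assert len(a) == len(b), "equal sample sizes assumed in this sketch"
    # Optimal 1-D transport pairs the i-th smallest points of each sample
    return np.mean(np.abs(a - b))

real = [0.0, 1.0, 2.0, 3.0]
fake = [0.5, 1.5, 2.5, 3.5]
d = wasserstein_1d(real, fake)
print(d)  # 0.5
```

Unlike the JS divergence, this distance stays finite and informative even when the two distributions have disjoint support, which is the property that stabilizes WGAN training.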
With the development of power grid technology, the coordinated development of multiple types of sources, networks, loads and storage is the future trend. Technologies based on blockchain and edge computing can efficiently exploit the advantages of the energy blockchain and address problems such as data security and flexible user transactions. To this end, this article formulates an important application scenario of blockchain in source-network-load-storage systems and proposes a design scheme for it. Combining typical real-world cases, the article calculates and analyzes the relationship between the degree of decentralization and the operating efficiency of the microgrid. On this basis, it calculates the operating costs under different conditions and finally puts forward the best operating strategy. The proposed design scheme can provide a new calculation reference for the economical, efficient and secure processing of large volumes of power transaction information.
In view of the complex soil structure in substation site areas and the difficulty of calculating grounding parameters, this paper studies the equivalence of a multi-layer soil structure to a uniform soil model. Based on CDEGS software simulation, the calculation of grounding parameters for a uniform-resistivity soil structure is analyzed, and the influence of surface soil resistivity, grounding grid area and reflection coefficient on the equivalent uniform soil resistivity is studied. The equivalent uniform soil resistivity corresponding to a double-layer soil structure is related mainly to the area of the grounding grid and has little relationship with its shape. Combined with the general design scheme for 110 kV and 220 kV transmission and transformation projects of the State Grid Corporation of China, the equivalent uniform soil model is simplified, and the influence of grounding grid area on the calculation of grounding parameters is reduced. A fast calculation method for the grounding parameters of a double-layer soil structure is put forward, which is practical for engineering design.
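A minimal sketch of two standard quantities behind this kind of analysis, assuming the usual definitions: the reflection coefficient of a double-layer soil, K = (ρ2 − ρ1)/(ρ2 + ρ1), and the classic circular-plate approximation R ≈ (ρ/4)·√(π/A) for the grounding resistance of a grid of area A in uniform soil of resistivity ρ. The full CDEGS computation used in the paper is far more detailed; this only illustrates the parameters discussed above.

```python
import math

def reflection_coefficient(rho_top: float, rho_bottom: float) -> float:
    """K = (rho2 - rho1) / (rho2 + rho1) for a two-layer soil (resistivities in ohm-m)."""
    return (rho_bottom - rho_top) / (rho_bottom + rho_top)

def grid_resistance_uniform(rho: float, area_m2: float) -> float:
    """Circular-plate approximation for a grounding grid in uniform soil:
    R = (rho / 4) * sqrt(pi / A), depending on area but not on grid shape."""
    return (rho / 4.0) * math.sqrt(math.pi / area_m2)

# Example: 100 ohm-m surface layer over 300 ohm-m subsoil, 50 m x 50 m grid.
K = reflection_coefficient(100.0, 300.0)    # 0.5
R = grid_resistance_uniform(100.0, 2500.0)  # about 0.886 ohm
```

Note that the plate formula involves only the grid area, consistent with the observation above that the equivalent uniform resistivity depends mainly on area rather than grid shape.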
To improve the engineering test efficiency of the urban rail transit CBTC system, a test sequence generation method based on the test line is proposed in this paper. First, test cases are classified, and for cases that must be executed in specific line scenarios, the required line conditions are extracted. The test line is then divided into sections, its characteristic data are extracted, and each test case is matched to the characteristics of the test line. Finally, the test sequence is formed by connecting test cases with a series-connection algorithm. In view of the high repetition rate of some test cases, a repetition-rate threshold is set to effectively reduce the repetition of test cases. Verification on a CBTC simulation test platform shows that the test sequence generated by this method is highly executable and significantly saves test time.
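The steps above (match cases to line features, chain them in series, cap repetition) can be sketched as a greedy loop. All names, the data layout, and the 20% threshold are hypothetical illustrations, not the paper's algorithm.

```python
def build_sequence(cases, line_features, max_repetition=0.2):
    """Greedily chain test cases whose required line conditions are available on
    this test line, skipping a case once its share of the already-built sequence
    reaches max_repetition (the repetition-rate threshold)."""
    sequence, counts = [], {}
    for case in cases:
        # Only schedule cases whose line-scenario requirements are met.
        if not case["requires"].issubset(line_features):
            continue
        executed = len(sequence)
        # Repetition check: skip if this case already dominates the sequence.
        if executed and counts.get(case["id"], 0) / executed >= max_repetition:
            continue
        sequence.append(case["id"])
        counts[case["id"]] = counts.get(case["id"], 0) + 1
    return sequence

# Hypothetical feature data extracted from one test line.
features = {"platform", "switch", "gradient"}
cases = [
    {"id": "ATP-01", "requires": {"platform"}},
    {"id": "ATP-01", "requires": {"platform"}},  # duplicate case, filtered by threshold
    {"id": "ATO-02", "requires": {"switch"}},
    {"id": "ATS-03", "requires": {"tunnel"}},    # scenario not available on this line
]
seq = build_sequence(cases, features)  # ["ATP-01", "ATO-02"]
```

The duplicate ATP-01 is dropped by the repetition threshold and ATS-03 is dropped because its required scenario does not exist on the line, so only executable, non-redundant cases are chained.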
Assembly gaps in structures often occur during the manufacture of civil aircraft and may cause additional stress, structural failure, or even the loss of an airplane. This article introduces two methods for solving the problem: forced assembly and shim installation, together with the associated procedures and criteria. Finally, the article presents the static strength evaluation for free shims and fixed shims.
To investigate the influence of the centroid position on the front lifting mechanism, the antenna array lifting mechanism of a vehicle-mounted radar is taken as the research object. A theoretical model of the mechanism is established that accounts for both the centroid deviation angle of the antenna array and wind resistance, and the influence of the centroid deviation angle on the thrust required by the lifting cylinder during lifting is compared and analyzed. The measured thrust of the lifting cylinder is then compared with the theoretical results. The results show that during lifting, the centroid deviation angle has a significant effect on the variation of the gravity component and the cylinder thrust, especially on the maximum thrust of the lifting cylinder and the position at which it occurs. The calculated results agree well with the measured values, confirming the correctness of the calculation method and the model.
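The role of the centroid deviation angle can be illustrated with a much-simplified planar moment balance about the hinge, assuming a fixed cylinder moment arm and no wind load (the paper's model includes wind resistance and a geometry-dependent arm). All parameter values here are hypothetical.

```python
import math

def required_thrust(theta_deg, mass_kg, r_m, delta_deg, arm_m, g=9.81):
    """Planar moment balance about the array hinge: the cylinder thrust must
    cancel the gravity torque, F = m*g*r*cos(theta + delta) / arm, where
    theta is the lifting angle, r the hinge-to-centroid distance, delta the
    centroid deviation angle, and arm the cylinder's moment arm."""
    return mass_kg * g * r_m * math.cos(math.radians(theta_deg + delta_deg)) / arm_m

# Thrust at the start of lifting (theta = 0) with and without centroid deviation.
f_no_dev = required_thrust(0.0, 2000.0, 3.0, 0.0, 0.8)
f_dev    = required_thrust(0.0, 2000.0, 3.0, 10.0, 0.8)
```

Even in this toy model, shifting the centroid angle changes both the gravity torque at every lifting angle and the angle at which the cosine term (and hence the required thrust) peaks, which is the qualitative effect the measurements above confirm.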
Object feature representation is one of the most important links in visual tracking. A traditional single feature can hardly describe the appearance of the target accurately, making it difficult to track targets undergoing large appearance changes. To address this, we train deep and shallow correlation filters with a convolutional feature and a color histogram feature, respectively, to jointly represent the object, and propose a dynamic weight strategy to combine the two features adaptively. Background information around the target is introduced as training samples through a context-aware framework to jointly train the filters. The combination weights of the deep and shallow layers are calculated by the dynamic weight strategy, realizing the adaptive combination of the two features. The proposed algorithm is tested on the OTB2015 dataset, and the results show that it improves tracking precision and success rate.
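One plausible realization of such a dynamic weight strategy is to weight each correlation response map by its peak-to-mean contrast as a reliability proxy; the abstract does not specify the paper's exact weighting rule, so this sketch is illustrative only.

```python
import numpy as np

def combine_responses(deep_resp: np.ndarray, shallow_resp: np.ndarray) -> np.ndarray:
    """Adaptively fuse deep (convolutional-feature) and shallow (color-histogram)
    correlation responses, weighting each map by its peak-to-mean contrast."""
    def contrast(r: np.ndarray) -> float:
        # A sharper, more confident response has a higher peak relative to its mean.
        return float(r.max() - r.mean()) + 1e-12
    c_deep, c_shallow = contrast(deep_resp), contrast(shallow_resp)
    w_deep = c_deep / (c_deep + c_shallow)  # dynamic weights sum to 1
    return w_deep * deep_resp + (1.0 - w_deep) * shallow_resp

# A sharp deep response should dominate a flat, uninformative shallow one.
deep = np.zeros((5, 5)); deep[2, 2] = 1.0
shallow = np.full((5, 5), 0.1)
fused = combine_responses(deep, shallow)
```

Here the fused map keeps the deep filter's peak at (2, 2), since the shallow map is flat; when the color response is the sharper of the two, the weighting shifts toward it instead, which is the adaptive behavior described above.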
Taking the antenna array lifting system of a radar as the research object and considering factors such as wind resistance and wind speed, a mathematical model of the lifting system is established. The results under different wind resistances (wind speeds) are compared to analyze how wind resistance changes the thrust required to lift the antenna array. The thrust required during lifting is measured and compared with the calculated results. The results show that wind resistance has a significant effect on the thrust required for antenna array lifting, especially on the maximum thrust and the position at which it occurs, and that the calculated results agree well with the measured values, confirming the correctness of the calculation method and model. The influence of wind resistance must therefore be considered in the design of an antenna array lifting system.