Journal:
International Journal of Circuits, Systems and Signal Processing, 2018, 12: 215-219. ISSN: 1998-4464
Corresponding author:
Hu, Zhigang(zghu@csu.edu.cn)
Author affiliations:
[Hu, Zhigang; He, Feng; Zhou, Yuanyuan] Software College, Central South University, Changsha, Hunan, 410075, China;[Liu, Wei] School of Management and Information Engineering, Hunan University of Chinese Medicine, Changsha, Hunan, 410208, China
Corresponding institution:
Software College, Central South University, Changsha, Hunan, China
Keywords:
Evaluation function of the weight index;Medical insurance fraud;Outlier detection;Particle swarm optimization;Weighted k-means algorithm
Author affiliations:
[Peng, Ying-Ying] School of Management and Information Engineering, Hunan University of Chinese Medicine, Changsha, Hunan, 410208, China
Corresponding institution:
School of Management and Information Engineering, Hunan University of Chinese Medicine, Changsha, Hunan, China
Keywords:
Broadcast Storm;Internet of Things;Low Energy-Consumption;Routing Protocol
Abstract:
Enabling large numbers of dynamic IoT devices to transmit data to other devices over the Internet is an important topic in IoT research, especially when a protocol must simultaneously avoid broadcast storms, adapt to dynamic environments, and keep energy consumption low; no routing protocol with all of these properties has yet been deployed in practice. This article describes GFG-L (Greedy Face Greedy for Location), a mobile IoT routing protocol that satisfies the above requirements and is expected to resolve the current dilemma and accelerate the realization of the Internet of Things.
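A minimal sketch of the greedy forwarding step that GFG-style geographic routing builds on: each node forwards to the neighbour closest to the destination, and a local minimum (no neighbour closer than the current node) is where real GFG would fall back to face routing, omitted here. The coordinates and function name are illustrative assumptions, not the protocol's API.

```python
import math

# Greedy step of GFG-style geographic routing (face-routing fallback omitted).
def greedy_next_hop(current, neighbours, dest):
    """Return the neighbour closest to dest, or None at a local minimum."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    best = min(neighbours, key=lambda n: dist(n, dest))
    # only forward if this actually makes progress towards the destination
    return best if dist(best, dest) < dist(current, dest) else None
```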
Keywords:
Image encryption;LSB substitution;ROI-based;Reversible data hiding;The medical image
Abstract:
A novel ROI-based reversible data hiding scheme in encrypted medical images is proposed. Firstly, a content owner partitions an original medical image into the region of interest (ROI) and the region of noninterest (RONI), and then encrypts the image using an encryption key. A data-hider concatenates the least significant bits (LSB) of the encrypted ROI and Electronic Patient Record (EPR), and then embeds the concatenated data into the encrypted image by LSB substitution algorithm. With the encrypted medical image containing the embedded data, the receiver can extract the embedded data with the data-hiding key; if the receiver has the encryption key, a medical image similar to the original image can be obtained by directly decrypting the encrypted medical image; if the receiver has both the data-hiding key and the encryption key, the embedded data can be extracted without any error and ROI can be losslessly recovered after extracting the embedded data. (C) 2016 Elsevier Inc. All rights reserved.
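The LSB substitution step described above can be sketched as follows, as a minimal illustration on raw 8-bit pixel values; the scheme's encryption, ROI/RONI partitioning, and key handling are omitted.

```python
# Sketch of LSB substitution embedding/extraction on 8-bit pixel values.
def embed_lsb(pixels, bits):
    """Replace the least significant bit of each pixel with a payload bit."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b   # clear LSB, then set it to the bit
    return out

def extract_lsb(pixels, n):
    """Recover the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]

stego = embed_lsb([120, 121, 122, 123], [1, 0, 1, 1])
```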
Author affiliations:
[李鹏; 王建新] School of Information Science and Engineering, Central South University, Changsha, 410083, China;[李鹏; 丁长松] School of Management and Information Engineering, Hunan University of Chinese Medicine, Changsha, 410208, China
Corresponding institution:
School of Information Science and Engineering, Central South University, Changsha, China
Journal:
Revista Tecnica de la Facultad de Ingenieria Universidad del Zulia, 2016, 39(11): 1-9. ISSN: 0254-0770
Corresponding author:
Liu, Wei(weiliu_china@126.com)
Author affiliations:
[Liu, Wei; Huang, Xindi] School of Management and Information Engineering, Hunan University of Chinese Medicine, Changsha Hunan, 410208, China;[Hu, Zhigang; Xian, Weicheng] School of Software, Central South University, Changsha Hunan, 410075, China
Corresponding institution:
School of Management and Information Engineering, Hunan University of Chinese Medicine, Changsha Hunan, China
Keywords:
Conditional complexity;Cyclomatic complexity;Identification of refactoring opportunities;Software metrics;Statistical analysis
Abstract:
In order to identify high conditional complexity in source code, a novel approach based on statistical analysis is proposed. Through statistical analysis of two software metrics, Method McCabe's Cyclomatic Complexity (MMCC) and Method Average McCabe's Cyclomatic Complexity Per Code Line (MAMCC), across a large number of projects, the probability density functions and cumulative distribution functions describing the distributions of these two metrics are obtained. Moreover, a model for identifying high conditional complexity is built by choosing reasonable thresholds for these metrics; this model can be used for preliminary screening of methods with high MMCC and high MAMCC. The experimental results show that the proposed approach can accurately identify candidate methods that need to be refactored.
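A minimal sketch of the screening step: flag methods whose MMCC and MAMCC both exceed chosen thresholds. The threshold values below are illustrative, not those fitted from the distributions in the paper.

```python
# Preliminary screening: keep methods where both complexity metrics are high.
def screen_methods(methods, mmcc_threshold=10, mamcc_threshold=0.5):
    """Return names of methods flagged as having high conditional complexity."""
    return [m["name"] for m in methods
            if m["mmcc"] > mmcc_threshold and m["mamcc"] > mamcc_threshold]

candidates = screen_methods([
    {"name": "parse", "mmcc": 15, "mamcc": 0.8},
    {"name": "log",   "mmcc": 2,  "mamcc": 0.1},
])
```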
Author affiliations:
[丁长松; 王志英] College of Computer, National University of Defense Technology, Changsha, 410073, China;[丁长松; 梁杨] School of Management and Information Engineering, Hunan University of Chinese Medicine, Changsha, 410208, China
Keywords:
Grid computing;Co-reservation;QoS satisfaction;Reservation strategy
Abstract:
To address the uncertainty of dynamic grid resource services, a multi-resource co-reservation strategy that quantitatively analyzes resource-service QoS (quality of service) is proposed. Based on an analysis of the QoS metrics of grid tasks running on resources, the strategy derives methods for quantifying and normalizing QoS satisfaction, and establishes a functional relationship between resource-service QoS and reserved capacity. In a market-economy setting, it analyzes the relationship between resource price and reserved capacity under task cost constraints, and solves for a load-balancing multi-resource co-reservation plan. Theoretical analysis provides a proof of the strategy's effectiveness and the corresponding algorithm. The simulation experiments use task workload traces from real grid systems as the experimental load, and evaluate the proposed reservation strategy in a relatively large-scale simulated grid system. The results show that the strategy significantly outperforms traditional reservation strategies in terms of the number of admitted tasks, resource utilization, and task violation rate.
Author affiliations:
[李鹏; 王建新] School of Information Science and Engineering, Central South University, Changsha, 410083, China;[李鹏] School of Management and Information Engineering, Hunan University of Chinese Medicine, Changsha, 410208, China
Corresponding institution:
School of Information Science and Engineering, Central South University, Changsha, China
Author affiliations:
[丁长松; 王志英] College of Computer, National University of Defense Technology, Changsha, China;[胡志刚] School of Software, Central South University, Changsha, China;[丁长松] School of Management and Information Engineering, Hunan University of Chinese Medicine, Changsha, China
Corresponding institution:
College of Computer, National University of Defense Technology, Changsha, China
Abstract:
A robust zero-watermarking algorithm is proposed based on merging features of sentences for Chinese text document authentication. In the scheme, a text is first segmented into sets of sentences, where a semantic code for every word can be obtained. Then the sentence entropy is calculated by the frequency of semantic codes, and the sentence relevance is calculated by the semantic similarity between words through the tree structure of words in Tongyici Cilin. By employing the sentence entropy, the sentence relevance, and the sentence length, a weighting function is used to obtain the final weight of each sentence. The nouns and verbs of the high weight sentences are selected to construct a watermark, which is encrypted and registered with a trusted third party called Certificate Authority (CA). To resolve disputes, the similarity between the watermark generated from the suspicious text and the watermark from CA is calculated. The experimental results show that the proposed algorithm offers better performance in terms of imperceptibility and robustness than other available algorithms.
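The sentence-entropy part of the weighting can be sketched as follows. The combining coefficients and the relevance input are illustrative assumptions, not the paper's values.

```python
import math

# Sketch of sentence weighting from semantic-code entropy, relevance and length.
def sentence_entropy(codes):
    """Shannon entropy of the semantic-code frequencies in one sentence."""
    n = len(codes)
    freq = {}
    for c in codes:
        freq[c] = freq.get(c, 0) + 1
    return -sum((f / n) * math.log2(f / n) for f in freq.values())

def sentence_weight(codes, relevance, alpha=0.5, beta=0.3, gamma=0.2):
    """Weighted combination of entropy, relevance and sentence length."""
    return alpha * sentence_entropy(codes) + beta * relevance + gamma * len(codes)
```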
Abstract:
Communication via instant messaging tools has become increasingly popular in people's daily lives. However, one of the main issues in such communication is the transmission of secret information, and many methods for covert communication exist. In this paper, a novel text steganography method for chat is proposed which utilizes emoticons and interjections. Owing to the tremendous numbers of emoticons and interjections used in many chat tools, the pre-shared sets of emoticons and interjections can be enlarged as required. Then, by selecting emoticons and interjections with the corresponding encodings at different locations, secret information can be embedded into the chat text. Experimental results demonstrate that the method achieves better capacity and embedding efficiency than other text steganography methods for chat.
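The encoding idea can be sketched with a tiny pre-shared emoticon set: each emoticon's position in the set encodes 2 bits. Real chat sets would be far larger, and interjections extend the codebook the same way; the set below is a made-up example.

```python
# Pre-shared ordered codebook: the index of each emoticon is a 2-bit codeword.
EMOTICONS = [":)", ":(", ":D", ";)"]

def embed(bits):
    """Map each pair of secret bits to the emoticon at that index."""
    return [EMOTICONS[bits[i] * 2 + bits[i + 1]] for i in range(0, len(bits), 2)]

def extract(emoticons):
    """Recover the bit pairs from the emoticon indices."""
    out = []
    for e in emoticons:
        idx = EMOTICONS.index(e)
        out += [idx >> 1, idx & 1]
    return out
```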
Journal:
Journal of Computational and Theoretical Nanoscience, 2015, 12(10): 3658-3661. ISSN: 1546-1955
Corresponding author:
Peng, Yingying
Author affiliations:
[Peng, Yingying; Li, Man] College of Management and Information Engineering, Hunan University of Chinese Medicine, Changsha, Hunan, China;[Li, Kenli; Peng, Yingying] College of Information Science and Engineering, Hunan University, Changsha, Hunan, China
Corresponding institution:
College of Management and Information Engineering, Hunan University of Chinese Medicine, Changsha, Hunan, China
Keywords:
Cluster;Data Mining;Improved K-Means
Abstract:
The K-Means algorithm has been studied extensively in recent years. The clustering result of the traditional K-Means algorithm is sensitive to the choice of initial points and to noise. In addition, the traditional K-Means algorithm favors only clusters with spherical shapes and similar sizes. A novel algorithm called KK-Means, which combines the K-Means and KNN algorithms, is proposed in this paper to address these weaknesses. Experimental results show that the KK-Means algorithm performs better than the traditional K-Means algorithm.
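A minimal 1-D sketch of one way to combine K-Means with a KNN step: cluster with K-Means, then smooth each point's label by a majority vote over its nearest neighbours. This is an assumed reading of the combination, not the paper's exact procedure.

```python
import random

# 1-D K-Means followed by a KNN majority-vote relabelling pass.
def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda c: (p - centers[c]) ** 2)].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

def assign(points, centers):
    return [min(range(len(centers)), key=lambda c: (p - centers[c]) ** 2)
            for p in points]

def knn_refine(points, labels, k_nn=2):
    refined = []
    for i, p in enumerate(points):
        # nearest neighbours by distance, excluding the point itself
        nbrs = sorted(range(len(points)), key=lambda j: abs(points[j] - p))[1:k_nn + 1]
        votes = [labels[j] for j in nbrs]
        refined.append(max(set(votes), key=votes.count))
    return refined

points = [1, 2, 3, 10, 11, 12]
centers = kmeans(points, 2)
labels = knn_refine(points, assign(points, centers))
```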
Keywords:
reversible data hiding;medical images;ROI-based;prediction error expansion;sorting
Abstract:
A novel ROI-based reversible data hiding scheme is proposed for medical images, which is able to hide electronic patient record (EPR) and protect the region of interest (ROI) with tamper localization and recovery. The proposed scheme combines prediction error expansion with the sorting technique for embedding EPR into ROI, and the recovery information is embedded into the region of non-interest (RONI) using histogram shifting (HS) method which hardly leads to the overflow and underflow problems. The experimental results show that the proposed scheme not only can embed a large amount of information with low distortion, but also can localize and recover the tampered area inside ROI.
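The prediction-error-expansion embedding can be illustrated on a 1-D signal with a left-neighbour predictor; overflow handling, the sorting technique, and the histogram-shifting step in RONI are omitted from this sketch.

```python
# Toy prediction-error expansion: predict each sample from its left neighbour
# and expand the error to carry one payload bit (fully reversible).
def pee_embed(signal, bits):
    """Embed bits[i] into position i+1; requires len(bits) == len(signal) - 1."""
    assert len(bits) == len(signal) - 1
    out = [signal[0]]
    for i in range(1, len(signal)):
        e = signal[i] - signal[i - 1]          # prediction error
        out.append(signal[i - 1] + 2 * e + bits[i - 1])
    return out

def pee_extract(stego):
    """Recover the payload bits and the original signal, left to right."""
    bits, recovered = [], [stego[0]]
    for i in range(1, len(stego)):
        ee = stego[i] - recovered[i - 1]       # expanded error
        b = ee & 1
        bits.append(b)
        recovered.append(recovered[i - 1] + (ee - b) // 2)
    return bits, recovered
```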
Journal:
Computer Modelling and New Technologies, 2014, 18(12): 245-249. ISSN: 1407-5806
Corresponding author:
Peng, Yingying
Author affiliations:
[Ren, Xuegang; Peng, Yingying] Department of Management and Information Engineering, Hunan University of Chinese Medicine, Changsha, Hunan, China;[Hu, Defa] School of Computer and Information Engineering, Hunan University of Commerce, Changsha, Hunan, China
Abstract:
Aiming at Low-Density Parity-Check codes, a reliability-based multi-bit-flipping decoding algorithm is proposed in this paper. The multi-bit-flipping criterion is based on reliable bit positions, and the threshold in the flipping decision (the number of bits flipped) can be dynamically adjusted during decoding. The proposed algorithm builds on the belief-propagation decoding algorithm and can be derived from its theory. Compared with the traditional weighted bit-flipping decoder and the multi-bit-flipping decoder, the proposed decoder provides a faster convergence rate and better performance. Simulation results demonstrate that the proposed algorithm achieves a better balance between performance and complexity.
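A minimal hard-decision bit-flipping decoder over a toy parity-check matrix illustrates the basic flipping loop; the paper's reliability weighting and dynamically adjusted multi-bit threshold are omitted, and the tiny code below (which forces all bits equal) is purely illustrative.

```python
# Toy parity-check matrix: rows enforce x0=x1, x1=x2, x2=x3.
H = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1]]

def bit_flip_decode(word, max_iters=10):
    """Gallager-style hard-decision bit flipping: flip the worst bit each round."""
    w = list(word)
    for _ in range(max_iters):
        syndrome = [sum(h[j] * w[j] for j in range(len(w))) % 2 for h in H]
        if not any(syndrome):
            return w                           # all parity checks satisfied
        # count unsatisfied checks touching each bit, flip the worst offender
        fails = [sum(syndrome[i] for i, h in enumerate(H) if h[j])
                 for j in range(len(w))]
        w[max(range(len(w)), key=lambda j: fails[j])] ^= 1
    return w
```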
Author affiliations:
[丁长松] School of Administration and Information Engineering, Hunan University of Chinese Medicine, Changsha 410208, China;[胡志刚] School of Software, Central South University, Changsha 410083, China;[丁长松; 王志英] College of Computer, National University of Defense Technology, Changsha 410073, China
Corresponding institution:
School of Administration and Information Engineering, Hunan University of Chinese Medicine, China
Keywords:
Single Nucleotide Polymorphism;tagSNPs;Genetic Algorithm;Artificial Neural Network
Abstract:
Currently, many approaches have been developed for tagSNP selection. However, these methods still have drawbacks, chiefly high time complexity, large numbers of selected tagSNPs, low prediction accuracy, and inefficient tagSNPs in follow-up studies. In this paper, we propose an informative-SNP selection framework based on a genetic algorithm to address these problems. We separately improve the phases of informative-SNP set construction and haplotype reconstruction. First, we eliminate the large number of redundant SNPs using LD values to obtain a candidate subset with little redundancy, and then optimize it with a genetic algorithm, which not only ensures reconstruction accuracy but also greatly reduces time complexity. In addition, to avoid repeatedly retraining the prediction model as traditional methods do (e.g., MLR and SVM), we make full use of the multiple-output property of a BP neural network to reconstruct all non-tagSNPs at once, which significantly reduces computational complexity. The experimental results show that our method performs much better than the current dominant tagSNP selection methods.
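A toy genetic-algorithm subset search illustrates the optimization phase. The fitness function here is a hypothetical stand-in for the BP-network haplotype-reconstruction accuracy used in the paper, and all parameters are illustrative.

```python
import random

# Toy GA: evolve subsets of SNP indices towards higher fitness.
def ga_select(n_snps, subset_size, fitness, gens=30, pop_size=20, seed=1):
    rng = random.Random(seed)
    pop = [rng.sample(range(n_snps), subset_size) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)      # crossover of two parents
            child = list(dict.fromkeys(a[:subset_size // 2] + b))[:subset_size]
            if rng.random() < 0.2:               # point mutation
                child[rng.randrange(subset_size)] = rng.randrange(n_snps)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Hypothetical fitness: prefer subsets of distinct high-index SNPs.
best = ga_select(10, 3, lambda s: sum(set(s)))
```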