Research Article | Open Access
Pavel Pesout, Ondrej Matustik, "On a Modeling of Online User Behavior Using Function Representation", Mathematical Problems in Engineering, vol. 2012, Article ID 784164, 13 pages, 2012. https://doi.org/10.1155/2012/784164
On a Modeling of Online User Behavior Using Function Representation
Understanding online user system requirements has become crucial for online service providers. The existence of many users and services leads to diverse user needs. The objective of this work is to explore algorithms for optimizing the provider's supply by proposing a new way to represent user requirements as continuous functions of time. We address the problems of predicting the system requirements and of reducing model complexity by creating typical user behavior profiles.
1. Introduction
Ubiquitous computing represents a new generation of interaction with computers and is a promising way for users to obtain the needed services, as well as for distribution companies to distribute their applications with lower distribution costs and attract customers for a longer time period. It has emerged as a natural evolutionary step in computer science as computer-based appliances are becoming smaller, more mobile, and more interconnected than ever before.
The aim of this paper is not to discuss the future potential of ubiquitous computing (as we believe that there are already enough articles concerning it) or to describe the general principles of computing and software engineering innovation adherent to ubiquitous computing, but rather to propose a new approach to modelling of ubiquitous computing online user requirements (further UCUR) and behavior, compute system level agreement with the UCUR provider, and reach the optimal infrastructure by user allocation among particular sources in order to fulfil the user’s needs.
The ubiquitous computing service has not been very clearly defined yet. We will therefore assume, for the purpose of our paper, that ubiquitous computing service is every regular service fulfilling ubiquitous computing requirements—that is, the service is accessible everywhere, fully integrated into everyday objects and activities and is not connected with any single type of hardware.
There are apparently some differences in the intensity of different service usage. For a company which provides more than one service, it is only rational to adjust the level of services provided during the day in order to optimize the usage of available resources and keep the level of services in line with client needs. We can assume that different users will require different kinds of services at different times, for example:
(i) a marketing user: during work time he/she uses mainly databases, storage, marketplace, and email services; in his/her free time mainly web services;
(ii) an accountant: during work time he/she uses mainly billing and accounting services, partly also infrastructure and databases; at the same time, it must be noted that the usage of services differs during the month, for example, the use of accounting and billing services intensifies at the end of the month due to financial statements preparation, the use of data sharing increases during the financial audit, and so forth;
(iii) a school child: will use mainly web services, e-mail, and data sharing, however, in a different time period than a full-time employee.
The examples shown above illustrate the differences in the intensity of service use.
In this piece of work we assume that our model company provides several different services and serves different types of users, as mentioned above. In an optimal mode, this operating model can bring significant economies of scale in the services provided; on the other hand, it can cause significant issues with setting up and managing the system.
The service provider has to manage and monitor the level of each particular service provided at any time. Based on our research, one possible way of reaching the optimal level of service management and monitoring lies in computing the system development and predicting its changes by modeling typical user behavior. Our approach is based on the possibility of modeling user needs as functions and then using these typical user needs as representatives of the groups created by functional data clustering techniques.
The further parts of this paper are organized as follows. In Section 2, the current studies in the field of ubiquitous computing and curve clustering techniques are briefly reviewed. In Section 3, the user behavior and the attributes that influence it are described. The methods of functional data clustering used in order to create typical user behavior requirements are described in Section 4. In Section 5, the overall system requirements are computed and system changes are discussed. The algorithms of system source’s allocation are proposed in Section 6. Finally, a conclusion is summarized in Section 7.
2. Related Work
Our work is not fully concentrating on the theory of ubiquitous computing, although it influences our research deeply. The basic ideas of ubiquitous computing are very well summarized in the work by Greenfield. The possibilities of future development in this field are named by many authors. Fano and Gershman deal with some of them, especially in the field of medical services and the mobile wallet, and Yu with Guo study the role of ubiquitous computing in retail banking.
We concentrate mainly on the ubiquitous computing online user requirements. Some authors even use the term ubiquitous cloud computing when they are writing about ubiquitous computing services, so the term cloud computing is in many aspects very close to ubiquitous computing itself. The ideas of cloud computing and its future role are very well described by Carr, who is mainly known for his comparison of IT systems to a standard commodity rather than a competitive advantage. According to him, the switch to cloud computing services (and also to ubiquitous computing services) will probably be very similar to the switch from single electricity generators to the electricity grid.
In many scientific sources we find classifications of cloud computing services; well known is the classification of cloud computing services by Cearley and Smith:
(i) Infrastructure as a Service (IaaS): basically raw compute and storage services; this option provides only an infrastructure without any software, so the modeling of the requirements is not particularly difficult.
(ii) Platform as a Service (PaaS): higher-level development environments which abstract the underlying technology and provide for scalability and rapid application development.
(iii) Software as a Service (SaaS): classical online software provided as a service with minimal installation requirements on the user's computer; almost all data are stored on the provider's side.
For the purpose of this paper, we introduce our UCUR model under the category SaaS; however, we strongly believe that our model is also suitable for other cloud computing categories (even though they are not as complex as SaaS). Another view on ubiquitous computing services can be found in the work of Kim et al., where the authors deal with the idea of Offload Socket Processing: these changes in the socket processing can be viewed as another method of resource optimization.
It is obvious that the UCUR model encompasses diverse types of services. Weinhardt et al. distinguish the following types of services for the Software as a Service approach (as a part of cloud computing services): Infrastructure, Storage, Database, Business Process Management, Marketplace, Billing, Accounting, Email, Data sharing, Data processing, and Web services, and we can also apply this categorization to the ubiquitous computing services. There are of course some other services, like web games, location-based services, and so forth, but they are generally included in the Web services category.
In our research we concentrate on searching for typical user behavior; hence, we need to define and classify the clustering techniques. Clustering is a process of grouping data of similar character into the same class or cluster. However, it is important to note that in case the data are measured as a function of a dependent variable such as time, which applies to our case (user requirements), the most frequently used clustering algorithms, such as hierarchical and partition-based ones, may not capture each of the individual shapes properly. That is why we have to choose more sophisticated, special methods of functional data clustering that are able to attend to the whole space among the measurements and are not limited to the obtained measurement set.
Recently, density-based clustering methods using Maximum Likelihood Estimation (MLE) have mostly been developed to recognize the most homogeneous partitioning of functional data. There are two types of density-based methods, which differ in their approach to the cluster memberships. Firstly, these memberships may be assumed to be some of the model parameters. These methods are called Maximum Likelihood Approach (MLA) methods. They are thoroughly described by Fraley and by Banfield and Raftery. The classification is a twofold one:
(i) the likelihood is analytically or approximately maximized over the joint parameters;
(ii) by the use of estimations, the likelihood function criterion is maximized over the cluster memberships.
Secondly, it can be assumed that the cluster membership is a random variable, and hence mixture models will be used. Due to the high computational cost of finding the global minimum, only the local one is sought through the application of the Expectation-Maximization (EM) algorithm described by Dempster et al. The classification is processed in the following way:
(i) the cluster memberships are iteratively estimated;
(ii) the joint model parameters are estimated using the membership probabilities.
Nevertheless, most of these methods are almost unusable for our model as their efficiency decreases if a considerable variability exists within each subpopulation or group, which can be expected when dealing with user requirements.
It is very important to preserve the possibility for an individual to differ partly in one or more characteristics from his or her group yet still exhibit the underlying behavior that distinguishes this group from the rest. In fact, there are only two suitable basic methods which solve the problem of atypical data sets:
(i) the untraditional inclusion of estimated regression coefficients into the k-means algorithm, innovatively proposed by Tarpey;
(ii) the random effects regression mixtures: a hierarchical model with a mixture on parameters at the top level and an individual-specific regression model at the bottom level, studied by Gaffney and Smyth.
3. User Behavior
The form of our solution to the allocation problem is largely influenced by several issues with which we are confronted in the process of establishing and managing the UCUR model. Firstly, different ubiquitous computing users have different requirements for the provided services, and their demands are placed at different times during the day. Secondly, we have to face the logical problem of a task backlog with a large number of users. A purely individual approach to each of them cannot be assured. Therefore, we propose to simplify the task by modelling a much lower number of typical behavior profiles.
Let N be the number of ubiquitous computing users. For each user we assume there is a sufficient history of measures of their requirements. These measures have to be assigned to several monitored attributes in the solution. There are at least five basic attributes characterizing the user needs:
(i) type of requested service (web, database, accounting, billing, etc.);
(ii) requested memory for the service;
(iii) requested computing power (both CPU and graphics card);
(iv) hard disk space (for the operations as well as for saving the results);
(v) claims on the line capacity for the connection between the service provider and its users.
However, generally it is possible to include even more attributes, for example, the speed of response. This can increase the model complexity.
Let H be the number of attributes included in our model. In our analysis we have identified some underlying premises:
(i) the measures have to stem from aggregations of requirements from previous time frames so that the period between two data points is covered;
(ii) the number of measures has to be relatively large, with only small time differences, as the user's access to services may change at any time;
(iii) due to the restrictions included in the problem, we assume the measures in the form of averages over a longer period of time (e.g., a month) rather than a shorter one (e.g., one day);
(iv) due to the different nature and needs of different attributes, we consider various frequencies of measures.
Monitored functions are determined by many variables; some are more critical (e.g., connection time, bandwidth, user habits, day of week) than others (speed of computer control, etc.). In our model, we assume that user access to services within one time zone follows the biological needs and work rhythm.
Given the definition of measures mentioned above, let, for the i-th ubiquitous computing user, i = 1, ..., N, and the h-th attribute, h = 1, ..., H, the n measured data points be represented by the vector y_{i,h} = (y_{i,h,1}, ..., y_{i,h,n}).
However, these vectors are not sufficient if we want to understand the user requirements thoroughly by all means. In our opinion, it is more useful to model user behaviour as a continuous function of time which reflects the actual demand (behaviour). Taking this consideration into account, we see y_{i,h} as a vector of data points measured from some underlying function.
To define a profile of typical behavior we have to deal with the clustering of tasks, which we can build on the averages of the overall user requirements classified into one cluster. Our aim is to find a partitioning into a certain number of groups so that objects in the same cluster have high similarity of their shapes while, at the same time, objects in different clusters have lower similarity.
4. Typical Users’ Profiles
To be able to define a typical behaviour profile we need to focus on the clustering techniques. As we have mentioned in Section 2, given the functional character of the data we may use both approaches: the random effects regression mixtures with a hierarchical model, and plugging estimated regression coefficients into the k-means algorithm.
The basic advantages of the k-means algorithm, which is in fact a specific case of the EM approach, are that it is computationally less demanding and does not neglect the naturally periodic character of the measures. Moreover, there are also other reasons why we should consider its use:
(i) all prerequisites of the algorithm are satisfied: trajectories belonging to one attribute are measured at the same data points and have the same length (e.g., one day);
(ii) the measurements are naturally periodic (days in a week), and we are looking for a solution that does not neglect this fact.
The k-means algorithm iteratively relocates (see Hartigan for more details) the objects into clusters C_1, ..., C_K by minimizing the within-cluster variance

sum_{k=1}^{K} sum_{y in C_k} d(y, c_k)^2,

where d is the considered distance and c_k represents the class centroid. However, in our model we consider functional data clustering. Following Tarpey's research, an individual regression model can be used to estimate the functional responses at a finite number of time points:

y_i = X b_i + e_i,

where b_i is a vector of regression coefficients, X is a design matrix determined by the choice of basis functions and evaluated at the measurement times, and e_i is a vector of random errors. The estimated regression coefficients can be obtained by least squares,

b̂_i = (X'X)^{-1} X' y_i.

Therefore, a natural way to cluster curves is to apply the k-means algorithm to the rows of the matrix B = (b̂_1, ..., b̂_N)'.
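As an illustration, this two-stage procedure (least-squares coefficients first, then k-means on the coefficient vectors) can be sketched in Python. The data, the basis choice, and the cluster count below are hypothetical; the clustering itself is a plain Lloyd's algorithm, not the authors' exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N user curves, each measured at n common time points.
N, n, K = 60, 24, 3
t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

# Design matrix X built from a small set of basis functions (1, cos t, sin t).
X = np.column_stack([np.ones(n), np.cos(t), np.sin(t)])

# Synthetic requirement trajectories: three template shapes plus noise.
templates = np.array([[1.0, 0.8, 0.0], [2.0, -0.5, 0.5], [0.5, 0.0, -0.9]])
labels_true = rng.integers(0, K, size=N)
Y = templates[labels_true] @ X.T + 0.1 * rng.standard_normal((N, n))

# Least-squares coefficients for every curve: b_i = (X'X)^{-1} X' y_i.
B = np.linalg.lstsq(X, Y.T, rcond=None)[0].T        # shape (N, 3)

def kmeans(points, k, iters=50):
    """Plain Lloyd's algorithm applied to the coefficient vectors."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        dist = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        z = dist.argmin(1)
        for j in range(k):                          # keep old center if a cluster empties
            if (z == j).any():
                centers[j] = points[z == j].mean(0)
    return z, centers

z, centers = kmeans(B, K)
```

Clustering the small coefficient vectors instead of the raw n-point trajectories is what makes the approach cheap while still comparing whole curve shapes.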
According to this classification, we assign the trajectory of the i-th user's requirements for the h-th attribute into one of the clusters.
It is obvious that the choice of the interpolation model is an important part of the model:
(i) clustering results can differ depending on how the curves are fitted to the data;
(ii) clustering results can differ depending on how the data are weighted; for example, using the cubic B-spline and the natural cubic spline, the clustering can yield different results despite the fact that the fitted curves are identical.
Our primary question of interest is which interpolation method to use. With regard to the periodic character of the data, we propose to include Fourier interpolation.
Let us denote the measurement times t_1, ..., t_n and assume an even number n of measurements (typically 12 daylight hours). For each object we are now looking for the function

f(t) = sum_{j=0}^{n/2} ( a_j cos(jt) + b_j sin(jt) ).

We also project the data observed in [t_1, t_n] into the interval [0, 2π), and for each curve we obtain a set of n + 2 parameters (even though two of them, b_0 and b_{n/2}, are zero). This is an important advantage since the number of parameters does not exceed the number of measures and does not complicate the computational cost.
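A minimal sketch of this interpolation in Python, assuming n evenly spaced measurements over one period; the coefficients are recovered via the FFT, and the example curve is hypothetical:

```python
import numpy as np

n = 12                                       # e.g. 12 daylight hours (n even)
t = 2.0 * np.pi * np.arange(n) / n           # times projected into [0, 2*pi)
y = 3.0 + np.cos(t) - 0.5 * np.sin(2 * t)    # hypothetical requirement measures

# Fourier coefficients from the FFT of the samples.
C = np.fft.rfft(y)
a0 = C[0].real / n                           # constant term a_0
a = 2.0 * C[1:].real / n                     # cosine coefficients a_1 .. a_{n/2}
b = -2.0 * C[1:].imag / n                    # sine coefficients  b_1 .. b_{n/2}
a[-1] /= 2.0                                 # the Nyquist term appears only once

def f(s):
    """Evaluate the interpolating function f(s) at arbitrary times."""
    j = np.arange(1, n // 2 + 1)
    s = np.atleast_1d(s)
    return a0 + (a * np.cos(np.outer(s, j)) + b * np.sin(np.outer(s, j))).sum(1)
```

The fitted f reproduces the n measurements exactly, and b_{n/2} comes out (numerically) zero for real, evenly sampled data, matching the remark above.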
However, the k-means algorithm has one important disadvantage: the number of clusters has to be given in advance, which is not needed by the random effects regression mixtures model thanks to the ability to use the Bayesian Information Criterion (BIC) proposed by Schwarz. What is more, the process is iterative, which allows choosing the best result by setting the initial parameters and the rules for committing the individual steps. Finally, the trajectories do not have to be measured at the same data points, and the measures belonging to different time series do not have to be of the same length (although, as we have mentioned above, these requirements are not necessary parts of our model case).
The mixtures model uses the interpolation (4.2) again. Let b_i be the parameters of this data level, which allow us to model the individual trajectory behavior. We may assume the i-th conditional distribution taking the form p(y_i | b_i) (for the h-th attribute). At the top level of the hierarchy, there is another model that describes the distribution of the parameters of each individual. Let θ = (π_1, ..., π_K, θ_1, ..., θ_K) be the parameters at this level, where π_k is the probability that an observation belongs to the k-th cluster and θ_k are the parameters of the distribution of b according to the known group template.
That is why the unconditional class membership density for y_i is a finite mixture model

p(y_i | θ) = sum_{k=1}^{K} π_k p(y_i | z_i = k, θ_k).

Since we know that sum_{k} π_k = 1 and the equality p(z_i = k) = π_k holds, we may use the maximum-a-posteriori (MAP) based EM algorithm in order to produce consistent parameter estimates.
The EM algorithm consists of two steps. In the E-step, the expected value of the complete-data MAP function is taken with respect to the posterior condition on the cluster memberships. We also evaluate the expected value of z_{ik} given y_i and θ, where z_{ik} = 1 if y_i is a member of the k-th cluster and z_{ik} = 0 otherwise, and we set the membership probabilities, using the parameters updated in the previous M-step, to

w_{ik} = π_k p(y_i | θ_k) / sum_{l=1}^{K} π_l p(y_i | θ_l).

In the M-step this expectation is maximized over the parameters π_k and θ_k. The complete-data MAP objective function, for the set of memberships z_{ik} of all objects and all clusters, is given as

Q = sum_{i=1}^{N} sum_{k=1}^{K} z_{ik} log( π_k p(y_i | θ_k) ).
This yields the following form of the EM-based algorithm:
(i) randomly initialize the membership probabilities w_{ik};
(ii) calculate the estimates of π_k and θ_k;
(iii) update the membership probabilities w_{ik};
(iv) loop to step (ii) and repeat until the expected value of the complete-data MAP function stabilizes.
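The E- and M-steps above can be sketched for the simplest case of a Gaussian mixture over the fitted coefficient vectors. The data, the spherical-covariance choice, and the farthest-point initialization are illustrative assumptions, not the authors' exact model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical coefficient vectors b_i, one per user curve, in three groups.
means_true = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])
B = np.vstack([m + 0.5 * rng.standard_normal((40, 2)) for m in means_true])
N, d, K = len(B), B.shape[1], 3

# Farthest-point initialization of the cluster means.
mu = [B[0]]
for _ in range(K - 1):
    d2 = ((B[:, None, :] - np.array(mu)[None]) ** 2).sum(-1).min(1)
    mu.append(B[d2.argmax()])
mu = np.array(mu)
pi = np.full(K, 1.0 / K)                 # mixing weights pi_k
var = np.ones(K)                         # spherical variance per cluster

for _ in range(100):
    # E-step: membership probabilities w_ik from the current parameters.
    sq = ((B[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    logw = np.log(pi) - 0.5 * (sq / var + d * np.log(2.0 * np.pi * var))
    w = np.exp(logw - logw.max(1, keepdims=True))
    w /= w.sum(1, keepdims=True)
    # M-step: re-estimate pi_k, mu_k, and var_k from the memberships.
    nk = w.sum(0)
    pi = nk / N
    mu = (w.T @ B) / nk[:, None]
    sq = ((B[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    var = (w * sq).sum(0) / (d * nk)

z = w.argmax(1)                          # hard cluster memberships z_i
```

Unlike k-means, the soft memberships w_ik let an individual sit between groups, which is exactly the tolerance for within-group variability argued for above.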
Of course, the conditional distribution must be fitted with some real-world distribution; we recommend normal mixtures with a Gaussian error term.
The main advantage of this method is the individual approach to each trajectory, which is modelled by its own function and is allowed to be regulated in a parametric manner. It is also a very effective method in case a considerably large inner-cluster variability exists in the model. The partitioning achieved by the use of the k-means algorithm or the random effects mixtures may be represented by the vector z_i = (z_{i1}, ..., z_{iK}), where z_{ik} = 1 if the i-th object is a member of the k-th cluster and 0 otherwise.
Let us denote by f_{i,h} the individually fitted functions from the interpolation model (4.2). By setting this, it is possible to compute the sought typical requirement profiles as the averages over all objects belonging to the k-th cluster:

g_{k,h}(t) = (1 / |C_k|) sum_{i in C_k} f_{i,h}(t).
More transparently, the functions should be transformed into nonaggregated forms. We have done some experiments in a general manner, and we can demonstrate some results of this kind of clustering technique in Figure 1. There, many users' requirements represented as particular curves are classified into three clusters. Bold curves display the typical profiles.
Note that all figures in this paper are only from experimental measurements and are not based on the real data.
To summarize the proposed process, the results are the following:
(i) the requirements of the i-th user for the particular h-th attribute are modelled as the functions f_{i,h};
(ii) each function is classified into one of the clusters, which is identified by the values z_{ik};
(iii) an analytical view may be considered for only a small number of typical behavior profiles computed thanks to the partitioning of the original objects;
(iv) user individualities boil down to the membership values z_{ik}.
5. System Change and Overall Requirements
Modern UCUR is built on the premise that the needs of the enrolled users are constantly changing and evolving. However, this fact can easily be integrated into our profiles of typical behaviour:
(i) If the i-th user decides to withdraw from the system, it is sufficient just to remove (subtract) all of his or her curves from the profiles of the clusters to which they belong. Recalculating the inclusion of other users is not necessary, assuming a high number of them.
(ii) If we identify a new user, we will set its measurements and allocate them into one of the created clusters: by using a modified Fisher's canonical discriminant analysis (see its basics described by Fisher) in case the k-means algorithm is used for the clustering problem, or by discriminant functions (quadratic or linear discriminant scores) in case of the random effects mixtures (see Ramsay and Silverman for more details). The modification of the canonical discriminant analysis that we have specified consists in estimating the new requirement's regression coefficients using the Fourier interpolation model (4.2) and least squares instead of the original measurements; this allows us to take into account the functional character of the data and to reduce the size of the task by detecting the most important time points for classification.
(iii) If we identify a change of needs of the i-th user on the basis of user complaints or explicit user behavior observation (or an arbitrary correction), we will remove all of his or her curves from the corresponding profiles, update the measurements, and allocate them into one of the clusters in the same manner as if they were a new observation.
With the ability to reflect new or changing requirements by reallocating them into existing groups, we may focus on the detection of the summary system requirements. Knowing the individual partitioning and the profiles allows us to give the summary requirements for the h-th attribute as the function

F_h(t) = sum_{k=1}^{K} |C_k| g_{k,h}(t).
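Under this notation the overall demand is just a cluster-size-weighted sum of the typical profiles. A small sketch, with made-up cluster sizes and Fourier coefficients:

```python
import numpy as np

def profile(t, a0, a, b):
    """Evaluate one typical profile g_k from its Fourier coefficients."""
    j = np.arange(1, len(a) + 1)
    return a0 + (a * np.cos(np.outer(t, j)) + b * np.sin(np.outer(t, j))).sum(1)

cluster_sizes = np.array([120, 45, 80])                  # |C_1|, |C_2|, |C_3|
coeffs = [
    (3.0, np.array([1.0, 0.0]), np.array([0.0, 0.0])),   # hypothetical profiles
    (1.5, np.array([0.0, 0.5]), np.array([0.2, 0.0])),
    (2.0, np.array([-0.5, 0.0]), np.array([0.0, 0.4])),
]

def total_requirement(t):
    """F_h(t) = sum_k |C_k| * g_k(t): overall demand on the h-th attribute."""
    return sum(m * profile(t, *c) for m, c in zip(cluster_sizes, coeffs))

t = np.linspace(0.0, 2.0 * np.pi, 96)
F = total_requirement(t)
peak_time = t[F.argmax()]
```

Because F_h is a closed-form function of time, the provider can read off the demand peak directly rather than re-aggregating every individual user curve.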
The UCUR provider now has precise information about the requirements on its system at any point in time, which helps to prevent overwhelming the system's sources. Nevertheless, to be able to decide about the approval or rejection of a new user application, it is first necessary to know how heavily the sources are loaded and how the current requirements are allocated.
6. Requirements Allocation
To define our algorithm of requirements' allocation, we assume that the UCUR provider has at its disposal Ω sources (servers) of certain capacities on the h-th attribute, evaluated as c_{ω,h}, ω = 1, ..., Ω. The issue of requirements' allocation can be seen as a version of a clustering problem. We therefore propose the following process, inspired by both partition-based and agglomerative clustering algorithms.
Step 1. Choose randomly Ω user requirements and allocate each of them to one source/cluster representing one server, with the fulfillment of the capacity condition on every attribute. Set the number of allocated requirements equal to Ω.
(i) Denote by L_ω the actual charge of the ω-th source; now it equals the single allocated requirement.
(ii) Denote by A the set of all indexes belonging to the allocated requirements; now it contains only the Ω chosen ones.
Step 2. For all sources and their charges we compute the cost/failure which is reached when adding a particular profile. We understand the cost as the deviation from the optimal status: the most effective allocation is the one that keeps the summary requirement function as constant as possible at any point in time for every attribute. For the h-th attribute we therefore set the matrix of elements representing the possible cost of adding the j-th nonallocated requirement to the ω-th source:

D_{ω,j,h} = ∫ ( max_s S_{ω,j,h}(s) − S_{ω,j,h}(t) ) dt,

where S_{ω,j,h} denotes the summation of the source's current charge and the candidate requirement. The matrix determines the possible system costs/failures according to the user's behavior. An example of this is illustrated in Figure 2. Bold curves represent typical profiles, a dashed curve shows the summation of these curves, the constant function is the maximum of this summation, and the filled area is the cost/failure.
The least values of D indicate the most optimal combinations of allocation. However, there are many possible user requirements' classifications, so computing the cost of every combination would be expensive; we therefore recalculate only the elements belonging to the sources whose allocation was previously changed.
(i) For the first requirement that has not been allocated yet, say the j-th, not yet in A, compute the cost of its allocation to each source with regard to all attributes as the weighted sum over h of w_h D_{ω,j,h}, where w_h are the weights. The weights may be omitted if all attributes are of the same importance; otherwise it is favourable to include them (their values depend on the empirical setting by the UCUR provider). Allocate the j-th requirement to the source with the least cost, subject to the fulfillment of the capacity condition; then increase the number of allocated requirements. If the condition is not met, allocate it into the most optimal source (according to the cost) that meets the condition. In case such a source does not exist, the requirement cannot be accepted.
(ii) Update the actual charge L_ω of the changed source.
(iii) Add the j-th index to the set A.
Step 3. Return to Step 2 while there remain nonallocated user requirements which have not yet been detected as unacceptable because of the objectively limited source capacity.
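Steps 1-3 can be sketched greedily for a single attribute. The discretized requirement curves, the peak-capacity condition, and the deterministic seeding below are simplifying assumptions over the text's description:

```python
import numpy as np

rng = np.random.default_rng(2)

T = 24                                           # time grid over one day
n_req, n_src = 30, 4
reqs = 1.0 + rng.random((n_req, T))              # hypothetical requirement curves
capacity = np.full(n_src, 30.0)                  # peak load each source can carry

def cost(load):
    """Flatness failure: area between the curve and its own maximum."""
    return float(np.sum(load.max() - load))

load = np.zeros((n_src, T))                      # current charge of each source
assigned = np.full(n_req, -1)                    # -1 marks a rejected requirement

# Step 1: seed each source with one (here: the first) unallocated requirement.
for s in range(n_src):
    load[s] += reqs[s]
    assigned[s] = s

# Steps 2-3: place every remaining requirement where it flattens the load most,
# subject to the peak-capacity condition; otherwise reject it.
for i in range(n_src, n_req):
    deltas = [cost(load[s] + reqs[i]) if (load[s] + reqs[i]).max() <= capacity[s]
              else np.inf for s in range(n_src)]
    best = int(np.argmin(deltas))
    if np.isfinite(deltas[best]):
        load[best] += reqs[i]
        assigned[i] = best
```

The flatness cost directly mirrors the filled-area failure of Figure 2: a source whose summed curve stays near its own maximum wastes little provisioned capacity.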
7. Conclusion
In the age of ubiquitous computing it will definitely be necessary to discover a new way of providing services to customers, where both the provider and the customer benefit from a simple and attractive revenue logic which is no longer based on the application development investment and which makes it possible to achieve the level of functionality, flexibility, and time to market required by users. On the other hand, the UCUR model ultimately changes the provider-customer relationship from one-to-one to one-to-many and to a typical utility-based e-commerce relationship, with a very crucial need to understand the user's behavior and an effective algorithm of requirements' allocation. The model of the usage must be precise enough, since an initial failure in the service delivery may result in the user's mistrust, and overprovisioning the service's hardware and software may result in high costs charged to the consumer or in a loss for the owner.
The primary motivation of this paper was to define a novel algorithm for the allocation of user requirements among available sources. In order to achieve this aim, we have introduced a new way to represent individual user needs as functional data measured in time for each of the included attributes. We have focused on finding the typical user behavior profiles by solving a functional data clustering problem. Hence, we have integrated linear regression into the k-means algorithm and dealt with the random effects regression mixtures with the EM algorithm and parameters at two levels. We have discussed the system development and the changes of actual user demand, and we have demonstrated how to compute the overall system requirements. Based on the executed research, we were finally able to define an innovative algorithm that leads to the optimisation of UCUR source utilisation and user allocation.
- A. Greenfield, “Everyware—The dawning age of ubiquitous computing,” Tech. Rep., New Riders, 2006.
- A. Fano and A. Gershman, “The future of business services in the age of ubiquitous computing,” Communications of the ACM, vol. 45, no. 12, pp. 83–87, 2002.
- J. Yu and Ch. Guo, “An exploratory study of applying ubiquitous technology to retail banking,” in Proceedings of the Academies International Conference, pp. 7–16, Academy of Banking Studies, Tunica, Miss, USA, 2008.
- N. Carr, “The big switch to cloud computing,” http://www.computerworlduk.com/.
- W. D. Cearley and M. D. Smith, “Cloud computing services: a model for categorizing and characterizing capabilities delivered from the cloud,” in Gartner Research ID: G00163913, 2009.
- S. Kim, S. Kim, K. Park, and Y. Chung, “Offloading socket processing for ubiquitous services,” Journal of Information Science and Engineering, vol. 27, no. 1, pp. 19–33, 2011.
- C. Weinhardt, A. Anandasivam, B. Blau et al., “Cloud computing—a classification, business models, and research directions,” Business & Information Systems Engineering, vol. 1, no. 5, pp. 391–399, 2009.
- C. Fraley, “Algorithms for model-based Gaussian hierarchical clustering,” SIAM Journal on Scientific Computing, vol. 20, no. 1, pp. 270–281, 1998.
- J. D. Banfield and A. E. Raftery, “Model-based Gaussian and non-Gaussian clustering,” Biometrics. Journal of the Biometric Society, vol. 49, no. 3, pp. 803–821, 1993.
- A. P. Dempster, N. M. Laird, and D. B. Rubin, “Maximum likelihood from incomplete data via the EM algorithm,” Journal of the Royal Statistical Society. Series B, vol. 39, no. 1, pp. 1–38, 1977, With discussion.
- S. Gaffney and P. Smyth, “Trajectory clustering with mixtures of regression models,” in Proceedings of the 5th ACM SIGKDD International Conference on Knowledge discovery-Data Mining, pp. 63–72, ACM Press, New York, NY, USA, 1999.
- T. Tarpey, “Linear transformations and the k-means clustering algorithm: applications to clustering curves,” The American Statistician, vol. 61, no. 1, pp. 34–40, 2007.
- J. A. Hartigan, Clustering Algorithms, John Wiley & Sons, New York, NY, USA, 1975.
- G. Schwarz, “Estimating the dimension of a model,” The Annals of Statistics, vol. 6, no. 2, pp. 461–464, 1978.
- R. A. Fisher, “The use of multiple measurements in taxonomic problems,” Annals of Eugenics, vol. 7, pp. 179–188, 1936.
- J. O. Ramsay and B. W. Silverman, Functional Data Analysis, Springer Series in Statistics, Springer, New York, NY, USA, 2nd edition, 2005.
Copyright © 2012 Pavel Pesout and Ondrej Matustik. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.