Abstract

A mathematical heuristic model was proposed to analyze the flow of information in administrative workflows. The model starts from a conceptual analysis from the perspective of probabilistic systems, information theory, and information entropy. The main aim of the analysis is to identify theoretically a workflow as a hybrid dynamic system in which the probabilistic distribution of the information, the time of information processing, and the precision with which the workflow is executed are determined by the cognitive performance of agents within a complex adaptive system. The model of analysis provides support for the search for empirical evidence in workflow investigations, highlighting the presence or absence of agents' ad hoc methods and their influence on a firm's productivity.

1. Introduction

This research was conducted on the administrative work of the Sectorial Administrative Group (GAS), one of the largest sectors of the Secretary of State for Education and Sport in Paraná (Brazil), using a methodology that allows the manager to understand critical points associated with routine administrative work. Since it is a large public organization, responsible for more than 2000 public schools in the State of Paraná, the amount of information and the administrative procedures required to keep up to date the databases and management of all schools, administrative units, collaborators (agents), students, state and federal government programs/projects, etc., reflect the particularities of human organizations that increasingly demand that the knowledge embedded in workflows be flexible from the epistemological, administrative, and scientific points of view [13]. The proposed analysis methodology allows the creation of workflows that can support several other existing methodologies and has the effect of adding greater chances of achieving efficiency and effectiveness in the management of production and precision in the supply of goods and services, whether in public or private firms.

Administrative work is characterized by the people, machines, and software that compose it, and it is able to process information of a physical, digital, and human nature (objective and subjective experience/natural language) [36]. To obtain better performance in the management of administrative procedures, it is necessary to identify the characteristics of these three types of information processed in the administrative routine.

Regarding the epistemological nature of the information, it is easier for agents to systematize objects of a physical nature because this is a more common human action on a day-to-day basis and does not depend on issues related to digital culture or socioeconomic conditions. Objects of a digital nature, in contrast, can be systematized only in a conditional situation [1, 3, 7] in which the user of systems, software, workflows, etc., must have prior knowledge to use these tools, which in many cases greatly restricts the scope of action of an agent as a function of his or her technical or academic formation, as well as the pertinent socioeconomic and cultural conditions.

In many cases, information of a physical nature can be accessed without missing information about how to perform a given procedure; for example, a flowchart shows that stock goods are replenished on the shelves. In this context, even if it is accompanied by decision-driven software that indicates the need to replace a type of merchandise, both the physical and the digital natures of the information are theoretically explicit to the agent, with no missing information that, under normal cognitive behavior, the agent would be unable to understand. However, even an activity of this nature has its own variation in the cognitive dimension [1, 2, 8, 9]. Mostly, structural organization emphasizes methods that consider only the physical aspects of routines (discrete variables), such as product properties or service quality, both identifiable with several methodologies nowadays. But for a hybrid organization, subjective work (agents) cannot be defined by such methodologies, and that aspect is one of the main points of this research. Nowadays, the well-known workflow methods of international industry mostly consider only discrete variables as the basis of production, but several other types of industries or labor activities demand that continuous variables be defined within a method. Following this path, this article addresses several gaps in the science regarding productivity in modern contexts of human interactions with nonphysical environments [3] or, in other words, hybrid structural organizations where nonphysical information is the main component of productivity.

A very important point, believed to be more critical in the treatment of administrative routine processes [1, 3, 5, 6], lies, in addition to the administrative procedures already formalized in a firm, in the knowledge acquired through the agent's experience, which is individual and often conditionally nontransferable from one agent to another [1, 2, 10]. This nature of information, coming from experience, is not stored or allocated in the cabinets of a room or in computer notebooks. It is the type of information that exists in the cognitive dimension [8] of the agent and that, in terms of productivity in public or private firms, constitutes the instrument of greatest impact on administrative organizational production. Experience, by the critical nature of its participation in a firm's productivity, has a high cost, and its transfer to other agents takes time and does not take place in a linear way [2, 3, 6, 11, 12].

When a traditional flowchart is analyzed, the constituent elements, such as geometric figures (Figure 1), which represent the beginning of the processes, the execution of activities, and decision-making based on specific parameters [5, 6], among others, give the manager an algorithm of how the administrative work occurs in logical terms within an organizational structure of well-defined variables (discrete form). However, subjective elements may be associated with the reading of a flowchart (Figure 1), the so-called ad hoc elements [14, 6], which depend on who executes them to be passed on to a workflow analysis. In this sense, it is possible to ask what types of information exist in administrative workflows and how this information at the ad hoc level is visible to the process manager and may or may not bring benefits to the firm.

Several methods and approaches to understanding the role of the agent in a firm, and ad hoc models, have been explored historically for workflow development [2–6, 14–19]. The objective was to obtain an analysis of the role of the agent and his production capacity from different perspectives, which could allow the incorporation of hidden information (subjective experience or information not reported by the agent) into formal administrative procedures [1, 20] as an attempt to evolve or improve the performance of a particular workflow. In this way, there is no possibility of separating the intrinsic relationship between the workflow, production, and the agent [12], who participates actively in the firm's success [1, 4, 6] and at the same time represents knowledge to be incorporated into new workflows and environment demands, aiming at a constant production of excellence that could be independent of the agent itself.

Historically, a model that has been adopted in workflow analysis is the strictly empirical model, in which mathematical methods with discrete variables, such as Petri Nets and Milner [21–23], or other methods, analyze information with qualitative characteristics [11, 18, 24, 25] and extract indicators [18, 24, 26, 27] that can serve as a reference for a team of managers or professionals responsible for workflow analysis or research in various fields of work. Other workflow approximations or causality-based concepts have much more labor-intensive mathematical models [28–33] and, although useful for some purposes, may be costly for firms looking for a more streamlined workflow analysis process that does not require a lot of time to complete the analyses. These methods, in some cases, are very useful for analyzing agents and operational procedures from a previous theoretical model or an inductive/intuitive model based on empirical evidence [5, 7]. However, they do not add concepts that discriminate the adaptability between organisms and objects [1, 3, 16, 28–30] or the predictability of the phenomena [2, 16, 30] occurring in workflows in several professional areas, owing to the presence of continuous variables [1, 3, 6, 10] in the system that are not tracked by traditional approaches.

Traditional flowcharts operate with a very large margin of compensation for information not presented in the flow itself, leading the manager and the agents of a firm to implement methods of problem discussion, situational methodological analysis, and other structures that are external to the workflow itself. Although these techniques are employed and succeed in many managerial cases, being essentially inductive, these models do not visualize scopes of analysis that come from a theoretical view, such as methods of deduction and inference [28], leaving an empty space [1, 3, 20] in workflow analysis.

The workflow analysis proposed in this research seeks to give the manager the conditions to manage ad hoc situations, within a hybrid dynamic system (presenting discrete and continuous information processing), with an early forecast of possible difficulties found in administrative work. In this sense, the method verifies whether the level of ad hoc knowledge is indeed producing negative results or producing positive results in the face of highly complex workflows.

Typically, the analysis of workflows has limitations caused more by the subjectivity of the agents [1] than by objective information on the work process. In this sense, for both the manager and the agent, this limitation prevents them from having symmetry of information regarding the administrative workflow and the way it is executed. In the search for a simple, conceptual mathematical model that can aid information flow analysis, a method was elaborated based on knowledge of information entropy and probabilistic systems [11] associated with knowledge from administration science and biology. This mathematical modelling gives the manager the chance to infer, from a flowchart algorithm, the critical points of analysis without empirical evidence, and which variables, discrete or continuous, influence process optimization, discarding excessively empirical models of analysis [5].

As a method of analysis for this modelling, a workflow can be understood as a probabilistic system containing information according to Figure 2. The diagram of Figure 2 shows a simplified representation of an observed workflow in terms of probabilistic events and information entropy. For a firm to have 100% constant production in both qualitative and quantitative terms, it would be necessary to have an input of activities in which the information on production and its processing by the agent could generate, as an output, continuous precision in the system. When dealing with real events, workflows naturally present variations of precision that derive from the random information present in the system and from the agent itself as a complex adaptive system (CAS).

Therefore, the number of variables that constitute the information in a system does not independently influence the final results of a workflow process, because there is a correlation between the organism and the objects in which the probability distribution is influenced by the junction of these two dimensions of analysis. In this way, the randomness or uniformity of the information present in a workflow, arising from the relation between the agent and the work activity, influences the precision in the production of a good or service. Likewise, decision-making is also influenced by the presence of objective and subjective criteria in the forms of processing [1, 3, 9, 14].

The optimization of processes in the sector is very important owing to new strategies of action of the public sphere in the economy and in the public administration of the state, since the better use of public money and natural resources arising from an optimization of administrative processes follows a global trend of stimulating the domestic economy by analyzing stochastic data from public agencies, associated with new trends in economic research and methodologies used by the natural sciences [3, 17]. This makes it possible for politicians, public managers, agents, private initiatives, and the needs of the population to work together. The importance of analyzing a workflow can be verified in several dimensions of human organizations. The results of the flow analysis influence the execution of collaborative activities, the production of a firm, the motivation and professional stability of agents, the management of administrative processes, the optimization of natural and financial resources, sustainable development, intelligent construction, and other aspects in related areas of knowledge.

2. Methodology

2.1. Ergodic and Nonergodic Workflows

Consider a workflow as a demand (D) executed (E) by an agent or a group of agents in a firm. In brief terms, the execution of labor activities consists in the use of information (I) to perform an activity in the time (T) necessary for its completion. This basic model can be represented by Figure 3. When an activity is executed, all information and time form an event i processed by the agent(s), in which the variables influence the expression of the processing and generate accuracy (P).

The diagram of Figure 3, ideally achieving 100% accuracy continuously, can be defined by an equation for the probability of precision in which all events i are determined by a discrete probability function whose distribution m of I generates [34] a precision P that conceptually presents ergodicity [35], defined by constant measures from I to P and T (limiting average of the samples over equal spaces and times) for any event i. In this case, I, the result of a single interaction, may present frequency variations in the workflow without the system presenting oscillations in its variables, mainly in I. This definition, as a rule, is only theoretical and is not verified empirically for any agent or mode of production of a firm, except for artificial intelligence programmed for specific tasks. However, it illustrates the ideal way of analyzing a workflow, where production and precision remain constant in qualitative and/or quantitative terms along the agent/information/processing/time sequence, generating precision ideally for a chain or web of events whose expression must equal the initial model for any region of the system (Figure 4).
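As a hedged illustration (the symbols P_i, I_i, and T_i for the precision, information, and time of an event i are assumptions of this sketch, not necessarily the notation of the original displayed equations), the ideal ergodic case described above can be written as

\[ \Pr(P_i = 1 \mid I_i, T_i) = 1 \;\; \text{for every event } i, \qquad \lim_{N \to \infty}\frac{1}{N}\sum_{i=1}^{N} P_i = \mathbb{E}[P] = 1, \]

that is, every event reaches full precision and the limiting sample average of the precision over equal spaces and times equals its expected value, so frequency variations in I do not produce oscillations in the output.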

However, the ergodicity of a system such as a workflow is not a constant expression in real-life situations. Therefore, for the purpose of empirical investigation of actual facts, it is recommended to start from an analysis in which nonergodicity is the a priori event, more present in the real world, where the distribution m assumes various forms [36].

If a stream of information arises from an event i, where individual X processes, as an input state, given discrete information I, generating time T and reaching precision P influenced by individual experience, then there is a probabilistic event in which, for the event i, the probability of precision (P) can be defined as a conditional function of the information I, the time T, and the individual experience.

Here, T is defined by the given I (data source) as processed by individual X, which determines how the information will be processed and how the time will be generated.

The input values of I and the individual experience generate, as outputs, m distributions for I and for T, both influenced by that experience. The individual experience can be defined as the ad hoc cognitive performance of the individual and represents a posterior processing stage of the first interactions between the organism and the object. In this sense, it is the same event i, but growing exponentially with individual experience, which is in turn defined by several other probabilities generated by the interactions and iterations of the organism and the environment. In the resulting equation (equation (6)), P assumes m variations according to the probabilistic distributions of I, the individual experience, and T, which in turn define the information entropy of P and hence of the workflow (see the m distributions [36] in Section 2.6 for more information). Equation (6) can also be written in an equivalent conditional form [36], owing to the conceptual description given above. It must be understood that time occurs only if information is processed, so the probability of time is dependent on the probability distributions of the information and the processing. However, the probabilistic distributions of I and T assume behavior in a sample space that does not have fixed intervals, since they come from complex adaptive systems [37] with a degree of freedom for any resultant that varies from individual to individual [1]. In this way, it is possible to assume that every learning process, as well as cognitive information processing, does not derive from a predefined sample of given information. This feature reflects the potential for apprehension that every organism may express in terms of semiotics and information processing skills. For this reason, the phenomenon can be identified as a probabilistic event defined by the data source (I) and the individual experience as input values and by the time of processing (T) and the resulting information as output values. Thus, the probabilities of I and T are not previously defined; rather, the probabilities are due to the dependence between these variables and their empirical expression, which identifies the phenomenon itself as the product of nonlinear dynamics [3] that cumulatively modify their probabilistic distributions as the complex adaptive systems (CAS) interact with the environment [37, 38].
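As a hedged illustration of the conditional dependence described above (the symbol E_x for the individual experience and the factorization itself are assumptions of this sketch, not the original displayed form of equation (6)), one can write

\[ \Pr(I, E_x, T, P) \;=\; \Pr(I)\,\Pr(E_x \mid I)\,\Pr(T \mid I, E_x)\,\Pr(P \mid I, E_x, T), \qquad H(P) \;=\; -\sum_{m}\Pr(P = m)\,\log_2 \Pr(P = m), \]

where the chain of conditionals mirrors the sequence data source → experience → time → precision, and H(P) is the information entropy of the precision, which inherits the m distributions of I, E_x, and T.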

Therefore, when investigating a workflow, the agent's processing mode is an empirical variable that can be conceptually (mathematically) understood, and this information processing influences the execution time of the activity, as well as generating an important variable of the system, the so-called ad hoc working execution circumstances. A conceptual mathematical model of workflows thus allows us to understand the processing mode of an adaptive system produced by a subjective nature and allows the identification of the probabilistic distribution of the variables I and T, as well as the presence or absence of ad hoc working models as triggers of the event performance.

In relation to the nature of ad hoc expressions, the mathematical models of probabilistic systems, information theory, and information entropy identify how information flows in CAS [14, 39–41], associating in this context concepts of the biological dimension as a source of inference about the ad hoc variations and the way information is processed. It is understood that ad hoc skills or inefficiencies can be observed in the overall analysis of the entire workflow, with a view to the entropy state of the accuracy and the time with which the information is processed. In this way, the analysis of the flows does not restrict the actions of the agents, such as their form of personal organization, but points out criticisms of ineffective performances and work methods.

2.2. Data Source

As for information processing, the information sources influence the processing and the precision in the execution of an activity or in decision making. In relation to information (I) and its forms of expression in work environments, the administrative workflow requires that data (information), whether physical, digital, or from individual experience, be accessible to all agents. The classification of the presence or absence of data in the work environment and/or individual organization takes into account only the existence of this information, disregarding, for this research, investigations about the properties of the information's nature. The data sources required for information processing are classified as follows [11, 13, 16, 42]:
(i) Default: existing data in full
(ii) Partially defined: data in parts
(iii) Undefined: information exists, but its use and other logical connections of application have not been defined
(iv) Not available: no information is available

Based on the available data sources for information processing, a hybrid probabilistic system of a workflow necessarily assumes dimensions in which the precision of the flow is influenced by the form of information processing, which can be defined in terms of discrete and continuous variables. Discrete variables are those in which predefined information does not present variances imperceptible to the agent. They are presented as discrete because they are quantifiable and/or objectively qualifiable by any individual. Therefore, the processing of this type of information implies that there is always a chance that all the events constituting the execution of activities or decision making are supported by a linear dimension of analysis by the agent or other active subjects of the process. Continuous variables, in turn, correspond to information that is not predefined, that is, partially defined, undefined, or nonexistent. They are presented as variables whose processing takes place, by nature, in a subjective way, leading the information in this sense to be considered a perception of the agent about the work environment, the external support of work, and the execution of the activities of the workflows [2, 9, 11, 43].

It is important to note that continuous variables can occur even with predefined information, owing to the nature of information processing and cognition. Therefore, information in a continuous state presents itself within workflows as one source of alternative development of ad hoc methods by agents [41], and the same holds for discrete variables. In many cases, agents produce information systems that are much more effective than the existing ones, or they can also produce truly disastrous processes.
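As a purely illustrative sketch (the function name, the enum values' phrasing, and the subjective-processing flag are hypothetical, not part of the original method), the data-source classification and the resulting treatment of a variable as discrete or continuous could be encoded as follows:

```python
from enum import Enum

class DataSource(Enum):
    DEFAULT = "existing data in full"
    PARTIALLY_DEFINED = "data in parts"
    UNDEFINED = "use and logical connections not defined"
    NOT_AVAILABLE = "no information available"

def variable_kind(source: DataSource, subjective_processing: bool = False) -> str:
    """Hypothetical rule following the classification above: fully predefined data
    processed objectively -> discrete variable; partial, undefined, or missing data,
    or subjective processing of predefined data -> continuous variable."""
    if source is DataSource.DEFAULT and not subjective_processing:
        return "discrete"
    return "continuous"

print(variable_kind(DataSource.PARTIALLY_DEFINED))                     # continuous
print(variable_kind(DataSource.DEFAULT))                               # discrete
print(variable_kind(DataSource.DEFAULT, subjective_processing=True))   # continuous
```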

The importance of ad hoc methods in work environments, whether in the scientific, technical, industrial, or administrative areas, is not ruled out. Assessing a workflow by noting the presence or absence of ad hoc working methods is important for checking points in the work organization that can be improved or replaced by these methods when they are more effective than the existing ones.

Both discrete and continuous states of information can generate unstable probabilistic distributions to the precision in the execution of activities, since they are configured as a nonergodic and complex adaptive system, in which all results, depending on the workflow, can be unpredictable and loaded with uncertainty.

A traditional flowchart of a firm is shown in Figure 5 [44]. Some questions are important for understanding the flow of information in any given workflow, for example: What is the execution time of the activities? How much information is needed to make decisions? Do time, decisions, and information processing influence productivity in qualitative and/or quantitative terms? Do these variables define the flow? And what objective information can we draw from these variables to make inferences about the process system?

It is important to point out that the variables mentioned in this research, such as time, production, and information, assume in the workplace the nature of the subjective experience of agents, in which organizational decision and/or control criteria such as time, difficulty in performing an action, and internal and external dynamics of the process are intrinsically linked to the psychological and cognitive dimension of the subjects [8, 45–47], so a methodology that seeks the objective data of the processes prior to such judgments is of great use. The objectivity of mathematical concepts brings more realistic margins of analysis that allow managers to identify causes without subjective elements in qualitative research.

2.3. Time Analysis

The analyses of the workflows of this research are classified according to the physical aspects that constitute the flows, biological aspects that operate the flows, the time in which each stream starts and ends, and the availability of the information processed in each step of a flow (Figure 6). The steps can be identified in Figure 6.

In order to analyze the time frequencies with which a workflow is carried out, a classification was made based on a minimum work schedule of 40 hours/week. Measures lower than this period do not correspond to the production time of some extensive administrative and procedural routines analyzed in the sector. Thus, the analyzed flows were observed for the time required for at least one iteration or interaction to occur, as well as for at least a single total production of all stages of a given flow. The classification of the workflow periodicity was defined according to the data from which the research was carried out, as follows:
(i) Frequent: occurs weekly
(ii) Occasional: occurs monthly
(iii) Rare: occurs yearly
(iv) Very rare: occurs over a period of years (indefinite period, but there is occurrence)

It is necessary to distinguish the time analysis of events described in this Section 2.3 from the time generated by information processing, which was discussed in Section 2.1 of the Methodology.

2.4. Mathematical and Physical Aspects of Workflows

In relation to the mathematical and physical aspects that constitute a workflow (Figure 7), some important parameters that are the base of the algorithmic structure of the flows were considered and can be briefly classified as follows [11]:
(i) Interactions: sequence between stages of the flow. It encompasses the whole set of actions or elements that constitute each stage (geometric figure) of the flowchart.
(ii) Iterations: stages in which there are repetitions caused by errors in the accuracy of the information along the flow and/or operational procedures that are part of the organizational routine. They can be considered in two forms:
Geometric: Consider a number of attempts (events) that are necessary to arrive at a certain result. This path, in case of being continuous and determining its expression for any time and event, can also be defined as a geometric variable. In this sense, iterations assume a geometric character insofar as the paths of repetition are the same as those of their origin [48].
Nongeometric: If new information enters the system, the number of events necessary to reach the same goal will vary and, as a result, it will be constituted indefinitely as more information enters, as nongeometric variables [48].
(iii) Quantity of information: all types of information, whether physical or digital, processed in the operational stages of the flow through distribution, storage, and processing (see the Results section for more information).
(iv) Pathways: paths generated by operating procedures.
(v) Probabilistic systems: information flow and distribution of probabilistic events resulting from the interaction between agent, information, and processing.
(vi) Time: periodicity of flows, frequency, time of execution of activities, processing of information between stages, etc.
(vii) Entropy: accuracy with which the information is passed on due to its probabilistic and operational composition (final flow analysis).
(viii) Mutual information (MI): information that sits between the sender and receiver in terms of accuracy (complexity); standard forms of these quantities are sketched just after this list.
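For reference, the standard information-theoretic quantities invoked in items (vii) and (viii) can be stated as follows (standard definitions; the notation is assumed here rather than taken from the original figures):

\[ H(X) = -\sum_{x} p(x)\log_2 p(x), \qquad I(X;Y) = H(X) + H(Y) - H(X,Y), \]

where X can be read as the information sent at one stage of the flow and Y as the information received at the next stage, so that I(X;Y) measures how much of the sender's information actually reaches the receiver.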

In relation to time as a property inherent in a workflow, it is assumed that nonergodic flows can also be originated by tiny time spans (microsystem) between frequency of iterations and interactions of information flows among systems. In this way, it is possible to regulate over time the frequency of information flows and thus reduce the nonergodic effects of the macrosystem as a whole [14, 33, 49, 50].

Time assumes the probabilistic dimension from the discrete to the continuous information. Thus, it is possible to distinguish two moments in which the time variable influences a workflow. Knowing the distribution m of the information, the first moment treats the distribution of time as the objective, without needing to evaluate time as something that promotes dynamics, that is, as a variable to be considered the propeller of the event itself. So, in this first moment, time does not define the flow; the information and its processing define it.

However, as a second moment, it is important to realize that time has a specific attribution in workflows. Because of the adaptive system formed by the agent, information, and processing, times are products that do not follow a pattern of recurrence among collaborators. Thus, the same workflows reaching a determined and constant precision of 100% did not present determined times, respectively. The expression of time can thus be characterized geometrically [48, 51] or nongeometrically [48, 50]. That is, time-dependent events, such as a sequence of activities along an intersectoral flow or among agents, that have regular intervals of expression can be defined as geometric in their production, although in practice this type of system always has a minimum margin of oscillation. Thus, regulating the information regulates the time associated with the processing of the information by the agent [50]. This presents an ergodic characteristic in the system whether time or information is observed. Otherwise, the main point of analysis is in the nongeometric time, which can generate a great asynchrony in the sequencing between activities. An asynchrony between workflows caused by the influence of time and not of information can cause the same probability distribution effects generated by the information [50, 51]; however, methods of analysis of this phenomenon, as well as of its regulation, do not follow the same methodologies used for the analysis of the information. In situations like this, the question is to recognize whether information or time generates the nonlinear dynamics. On-site analysis is required to recognize the problem.

Variations in time can also be caused by the agent and by other factors that go beyond information and processing, such as indeterminate factors (psychological, emotional, work income, medical conditions, among others). Issues related to methodologies for identifying time as the cause of fluctuations in the flow of information in workflows have not been addressed in this research.

2.5. Analysis of Information Entropy and Probabilities (IEP) and the Biological Aspects in Workflows

An important aspect of analyzing the physical and mathematical structure of workflows lies in the processing of information by agents. There is not a single way to process information, and this characteristic can be attributed to the subjective way in which the prior knowledge of an individual and his or her work experience take on the role of creating ad hoc work models. The term ad hoc derives from Latin and roughly means an action not previously delimited [4] or provisionally defined for the execution of a given activity. A common improvisational action in CAS [37, 38] arises to the extent that the organizational processes around workflows may or may not be well established and may be adapted to new technologies, individual work methods, cognitive abilities, etc. This allows flexibility for workflow models; however, several models do not have sufficient flexibility associated with the biological aspects of workflows. Workflow models based on incomplete information have been suggested as ways of testing hypotheses, whether for ad hoc work analyses or incomplete deterministic data [16]. A limiting characteristic of this kind of model is the constant reorganization of the biological dimension, producing inconsistent data in a continuous way, which requires, under the standardization of workflow analysis suggested by Gomez-Cabrero et al. [16], a counterpart of effort to keep up with the constant evolution of these types of systems.

The ad hoc working models can take on aspects of the information flow with a discrete or continuous dimension, depending on the correlation between the agent and the information to be processed. They can be considered generating sources of continuous variables of cognitive processing, conditions, or factors such as perception, imagination, subjectivity, long-term memory (LTM), decision making, working memory, and short-term memory (STM), among other medical or inclusion conditions, which cannot be discriminated in a predetermined manner in this research [9, 11, 14, 38, 52]. These variables may or may not be very effective in performing activities but may not be observable owing to the nature of the information's origin. This defines the reason for analyzing workflows as probabilistic events with distributions that depend on whether or not there is an ad hoc method and on how much information is needed to perform the activity.

Among the biological aspects, some factors that promote the main point of analysis about the origin of the probabilistic distributions of information flows were highlighted:
(i) Working memory/STM/LTM: Discrete variable that can serve as a nominal metric to compose the probabilistic systems of workflows and information. The working memory/STM/LTM processing limits, in conjunction with the availability of the data constituting the workflow information, define the variation, from the CAS, between the discrete and continuous variables of a workflow.
(ii) Complex adaptive systems (CAS): Relationship between the biological conditions of the brain-mind (previous item) and the operational interfaces of the information (physical, digital, and biological) that influence the processing, storage, and flow time [37–41].
(iii) Perception: Continuous variable in which information assumes subjective forms influencing the ability to work, for both positive and negative purposes. This factor is always present in the workflows of any work area [9, 45].

Usually, the organization and processing of information rely on cognitive support and on external support. External support has historically allowed a better intellectual development of the human being and of his processing capacity, whether through writing, communication, or the historical record of human thought. Currently, information technology allows the use of external media such as text editors, calculators, drawings, the Internet, virtual memory, and mobile communication. Thus, much of the human capacity to perform complex activities depends on external support [52] for cognitive processing, which by its biological nature is limited, as becomes evident when activities are performed without external support. In work situations in which external support is precarious, and even in ideal conditions, the agent will naturally use work methods that may not be the formalized ones, the ad hoc models, whose purpose is to find solutions to meet the demands of work. In this way, ad hoc models are elaborations of the agent that arise and are destined for the processing of information with the existing external support. Whether owing to cognitive limits or profiles or to the limits and types of external support, the conditions under which ad hoc working models are generated are present in any area of human work, even where there are formalizations or methods historically reiterated in order to attain the accuracy of certain knowledge coming from the scientific method.

Workflows, when they reach an adaptive dimension, have an aspect of refinement of the work model, caused by the trial and error of the agent in finding the best solution to a problem. The chances of success in executing activities, whether by ad hoc models or not, are thus linked to the presence of external support in the work environment. The chances of success guarantee less random data for the processing of information and the possibility of treating variables, whether discrete or continuous, in an objective and determined way, like an adaptive movement between the agent and the information.

The development of ad hoc work methodologies takes into account a continuous demand for information along several dimensions that the agent often cannot supply with the support of knowledge or individual experience. In these cases, what is common is that the chances of error (noise) increase and the precision with which an activity is performed is reduced proportionally. The noise of information processing appears in this sense as potentially negative ad hoc work, that is, inefficient and ineffective. However, while negative, it is not uncommon to find such models in human work organizations in all areas of work.

A very important point in the consideration of ad hoc models in work environments is that the cognitive capacity of the human resource in question often does not matter, as long as external support can contribute effectively to the production of labor. That is, the absence of external support is a preponderant factor in the treatment of these administrative issues since, once there is an inherent biological limit [38, 40, 46, 47, 52] and there is no adequate external support, the ad hoc method relies strongly on continuous variables, which may have an effect of inaccuracy in the production of an activity. In this sense, the procedures for inclusion of agents with special needs are questioned in ways that demand external support adequate to specific individual and/or sensory/biological abilities.

Another fundamental point of analysis is the agent's ability, for biological or operational reasons, to create an ad hoc working model. In this sense, ad hoc working models are adaptive attempts to process the information using the individual's own cognitive ability, together with external supports that can aid the processing, whether in the search for precision or for time. However, the previous experience and knowledge of the agent are essential for the creation of work models with high positive potential.

More efficient and effective ad hoc methods tend to be generated by the knowledge and experience of the individual rather than by the formalization or standardization of working methods. Investing in basic training and improving the preexisting knowledge of agents enables the implementation of higher impact ad hoc models. These same discussions about human organization routines and workflows are, in another way, also carried out for machine learning systems regarding the presence of hybrid systems and mathematical modelling programming [43]. However, it is clearly visible that robotics research considers hybrid systems within the same framework as this research does for human agent-based systems, but it differs strongly in the way in loco investigations can be performed, not only for advancing knowledge about data analysis but also in considering a method of analysis suited to the organism's type of uncertainties and regularities, which is so far not completely compared or understood [38, 40, 46, 47, 52] in relation to the nature of object patterns.

In the proposed method of this research, the IEP qualitative analysis presents two distinct stages: first, an evaluation of the information flow and its characteristics and, later, the maximum information entropy, which evaluates the influence of the adaptive system on the system. In this sense, biological aspects influence the accuracy with which the information is processed, in view of its availability, working methods, ad hoc support, time, and the intersectoral relationship or relationship between agents.

The first stage of the information flow analysis is a screening of the physical and biological variables that constitute the workflow, defining the system in its gross form and highlighting the empirical aspects that support the mathematical theorization. The second stage shows the CAS [37–41] as forms of organization in which the information flow is processed until its final stage and the biological refinement generates in the work environment the final result of the information, whether that biological effect is positive or negative.

It is possible to define that a biological aspect that occurs as a standard in organisms is the lived experience and the transformation of this information into probabilistic events of less random distribution. In this sense, it is argued that biological characteristics are intrinsically related to axioms of information theory and mathematical branches.

Shannon [53], in his research on information entropy, obtained an important derivation about information and probabilities, noting that the more information there is in a system, the less likely the event is to occur, and vice versa. Likewise, when one considers mathematical axioms such as monotonically decreasing and increasing effects, the logic remains the same, in which the greater the chance of an event occurring, the less information it exhibits [10, 46]. This structure of thought also supports the experience of organisms, in which an increase in the probability of carrying out an activity reduces the amount of information to be processed, since, through experience, cognitive processes generate a pattern of the causal relations between objects considered in the sensory and cerebral-mental order [8, 11, 46]. Consider this example to illustrate the reasoning given above: imagine that a person has never been to a large metropolis like New York, and upon arriving at the place, all information regarding time, space, food, transportation, lodging, communication, commerce, etc., holds for the individual a degree of disorder in terms of recurring patterns that are not yet cognitively bound to anything in terms of lived experiences. As the individual lives in the place, patterns are formed and the entire degree of disorder of the system begins to be reduced, maintaining relative stability as to the forms of interaction and information processing of the site [8].
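Shannon's relation between probability and information can be stated compactly through the standard notion of self-information (a reference formula, with notation assumed here rather than quoted from [53]):

\[ I(x) = -\log_2 p(x), \]

so an event with probability close to 1 carries almost no information, whereas a rare event carries a great deal; the entropy of a source is simply the average of this self-information over its probability distribution.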

This example illustrates a monotonically increasing effect of information when the individual arrives at the site and a monotonically decreasing effect expressed as an adaptive system, in which the probabilities of performing activities from information predefined in the place become more present and experienced by the individual, generating, in turn, patterns of recurrence among multiple variables. If we compare the events at the beginning and after a certain time of permanence of the individual in the place, the probability of performing activities with 100% accuracy in feeding, moving, and communicating will be greater over time than at the beginning of the event.
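A minimal numerical sketch of this adaptation effect, with invented probability values (the four "transport options" and their weights are purely hypothetical), shows the entropy of the agent's distribution over possible actions dropping as experience concentrates the probability mass:

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Hypothetical distributions over four transport options in a new city
on_arrival = [0.25, 0.25, 0.25, 0.25]     # no lived experience: maximum disorder
after_a_month = [0.85, 0.10, 0.04, 0.01]  # experience concentrates the probabilities

print(entropy(on_arrival))     # 2.0 bits
print(entropy(after_a_month))  # about 0.78 bits
```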

A complex system, as found in human and natural organizations, exhibits a great deal of information, so the probability of occurrence of certain successful events arises when that information is distributed at random and thus with unpredictable behavior. In executions of activities in workflows where success is assigned probabilistic margins, information is needed to enable the agent to process it and achieve the determined accuracy. In this sense, when analyzing a flow, it is necessary to have a metric based on a nonstationary analysis of the event, because it results from an adaptive system in which the information of the system does not stand as impartial to the individual. Thus, a CAS can display much or little information according to the biological conditions between the organism, the object, and their causal relationships. Even with little information about objects and causal relationships, biological conditions may present oscillations for the execution of a 100% successful activity, caused by factors related to the dynamics of the organism-object set [1]. This demonstrates that a probabilistic and information-theoretic analysis of CAS [14, 37] is recommended and appropriate. Concomitant with this way of looking at the problem, workflows are conditioned to an analysis of the nondivisible interaction between the organism and the information, or between the organism, the objects, and the causal relationships between objects.

It is interesting to realize that if a workflow is executed with 100% accuracy, the random information noises that the system can generate or possess are nonexistent. However, 100% accuracy is a state in which the organism inhabits the world, which does not say anything about the actual form of the world, because for each biological system in adaptive function, several different results could be observed. Also, analyses of workflows cannot be obtained by strictly empirical methods from the information variable alone. Empiricism in this sense is precise as a method only for descriptions of monotonically increasing or decreasing events when it comes to systems that are not adaptive, that is, that do not involve the presence of an organism in the probabilistic event.

Epistemologically, for the analysis of this type of interaction, knowledge types such as working memory, long-term memory and others are tools to conceive the object from the organism. These concepts also support the mathematical axioms that the more information an adaptive system exhibits, the less the ability of the organism to process them and vice versa. Thus, for the concepts attributed mathematically as monotonically increasing and decreasing, the biological aspects present themselves in the workflows as an observation of the ad hoc method of the organism and the observation of how precision is given in the execution of the activities, sustaining the precision by the idea that insofar as an activity is performed, over time, it will present a 100% probability of success, and less information of random origin either by the object or by the organism will be present in the system. On the contrary, an ad hoc methodology with low potential for processing necessarily exhibits random information that will be processed as a continuous variable of the event, associated with the biological condition of subjectivity in union with external support.

2.6. M Distributions of Information as a Control Theory of the Hybrid Workflows

Because a large number of variables constitute workflows, which are complex in nature [43] as complex adaptive systems, statistical analysis of the portion occupied by each category of distribution on a scale of 0 to 100% is not recommended. The nonlinearity of complex adaptive systems [14, 36–41] prevents comparing a group of individuals, with naturally different, specific, and indeterminate cognitive patterns and forms of work, in order to generate patterns of execution of linear or nonlinear functions. Thus, the probabilistic distributions and precision margins of 100% or less expressed in the workflows can be summarized in Figure 8.

The mathematical modelling that describes the information flows in the workflows of Figure 8 can be defined from a given event i, defined by equation (6) (see Section 2.1 in the Methodology), of the interaction between the given discrete information (data source) (I) and the individual experience as input variables, defining the time T (discrete or continuous) as a function of I and the experience (outputs), for the execution of individual or intersectoral work/work between agents reaching precision (P), where i assumes the following expressions [36].

2.6.1. Discrete Binomial Probability Distribution

Defined by the binomial probability distribution for a single event and i.i.d. (independent and identically distributed) trials, this type of workflow presents a binary probabilistic distribution in which the existing information and the precision are defined by a low amount of information, generating a high probability of success in the execution of activities. Binary information flows, when predefined, assume distributions as described above; otherwise, they may present statistical weights that are not products of the probabilistic distribution generated by the system's information but are produced by ineffective administrative practices associated with hierarchy and/or agent motivation and/or interpersonal relationships among agents. The presence of weights makes the system prone to other distributions, such as a PDF (probability density function).
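For reference, the standard binomial form (the symbols n, k, and p are assumed for this sketch and may differ from the original notation) is

\[ \Pr(K = k) = \binom{n}{k} p^{k} (1-p)^{\,n-k}, \qquad k = 0, 1, \dots, n, \]

and for a single event (n = 1) this reduces to the Bernoulli case, \(\Pr(K=1)=p\) and \(\Pr(K=0)=1-p\), matching the binary success/failure reading of the precision described above.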

2.6.2. Discrete Probability Distribution

This is defined by a general discrete probability distribution.

The source of information for this workflow is predefined, generating events in which there is no oscillation as to the precision in the execution of the activities defined in the workflow. This type of information flow is considered the most stable of all and ideal for any workflow when the goal is to reach 100% probability of performing activities. In the case of probabilities below 100%, the results revert to the original equation (6).
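For reference, a hedged sketch of the general discrete form and of the ideal case described above (notation assumed for this sketch) is

\[ \sum_{m} \Pr(I = m) = 1, \qquad \Pr(P = 1) = 1, \]

that is, when the data source is fully predefined, the precision distribution collapses onto 100% success, and any probability below 100% reverts to the general dependence of equation (6).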

2.6.3. Discrete Probability Distribution and Monotonical Decrease

Defined from a discrete cumulative distribution function (CDF), the probabilistic function is discrete owing to the presence of predefined data, but with uncertain processing and/or temporality, which in an adaptive system assume a well-defined and predictable ad hoc mode of work.
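For reference, the standard discrete CDF referred to above is (notation assumed for this sketch)

\[ F(x) = \Pr(X \le x) = \sum_{k \le x} \Pr(X = k), \]

which accumulates the probability mass of the predefined data along the flow.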

This type of distribution associated with adaptive systems does not represent a critical information entropy, in which it would not be possible to reach accuracies close to 100%. The monotonically decreasing function is expressed by the high probability of accurate execution and the low presence of information that generates randomness in the system. It depends on an ad hoc method to achieve an accuracy of 100% or close to it. Mathematically, it is not possible to obtain a CDF-like probability as described above as a function of the organic component of the system. Thus, the cumulative function of information as discrete values can, of course, be processed to the inverse of the manifestation itself in its physical nature or axiomatic origin of probabilities (a characteristic event of a nonadaptive system). Precisely, propositions of biological order can establish a function between precision and information in which the probability of precision is strictly greater than that of information, the set of P contained in I being at the same time contained in another set of unknown dimension, that of individual experience, that is, the experience accumulated by the individual or, in other words, the accumulated information of n events i, which confers on the biological potential the possibility of exceeding the precision that the information alone would allow. A cumulative function in an adaptive system assumes the biological form of the individual and breaks the axiom of probabilities, differentiating the axiom that applies to the physical world from the complex adaptive world [53].

2.6.4. Discrete Probability Distribution and Monotonical Increase

Defined, by contrast with item 2.6.3, by a monotonically increasing function, the ad hoc mode here may present itself as the limiting function of the processing of an adaptive system.

The main characteristics of this type of workflow are the defined or partial presence of information and temporal relations, and the fact that the processing of the information generated by the ad hoc modality is visibly critical (vulnerable). Distributions that do not have adequate ad hoc support have a critical degree of entropy as to flow accuracy. Although a working methodology can control the flow of information and time, this type of system has a high probability of inaccuracies, which evidences the monotonically increasing function in its natural axiomatic form of probability theory. This type of system depends on the ad hoc method and presents information without defined processing and/or information that brings randomness to the system.

A system with this distribution but without an ad hoc method, or with an inefficient one, necessarily presents a PDF distribution, which will be addressed in items (e) and (f).

2.6.5. Continuous Probability Density Function Distribution and Monotonical Decrease

Defined analogously to the equation of item (c), the probability densities behave as follows.

They assume a monotonically decreasing function while still presenting high randomness in the precision of the information flow in the system. This type of workflow has partially defined, undefined, or nonexistent information. The information processing is kept relatively stable (decreasing) owing to the entropy limit of the system (individual experience and ad hoc methods), which has no generation of, or interaction with, new sources of information (I) feeding the entire system and/or its steps. A very strong characteristic of this type of workflow is the presence of continuous variables as a form of ad hoc methodology producing several different results. Even when schedules, spreadsheets, and other tools (external support) are used, the information processing is performed in an intuitive, perceptive, and/or imaginary way, producing evident probability densities in the judgment produced by an adaptive system. This defines a system that produces information with critical accuracy (critical entropy).
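For reference, the standard continuous probability density form referred to above is (notation assumed for this sketch)

\[ \Pr(a \le X \le b) = \int_{a}^{b} f(x)\,dx, \qquad f(x) \ge 0, \qquad \int_{-\infty}^{\infty} f(x)\,dx = 1, \]

so that the precision is no longer concentrated on discrete outcomes but spread over a continuum of possible results produced by the subjective processing.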

2.6.6. Continuous Probability Density Function Distribution and Monotonical Increase

In the same way as in item (e), the adaptive system strongly influences the accuracy through CDF-type distributions of the system. However, owing to the partial or undefined data source of the system, which escapes the ad hoc control capabilities, the flow of data is absolutely continuous. In the same way as in item (c), for all probabilistic distributions of physical and mathematical axiomatic order, biological conditioning causes oscillations in the system and, with low probability, some possibility of a successful event [53], but that does not modify the classification of the adaptive system.

2.6.7. Joint Probability Density Function

Defined by a joint probability density function, this system presents multiple variables produced by other adaptive systems interacting among themselves and their events. The communication of the information suffers imprecision through webs of workflows and individual experiences, mainly characterized by the imprecision between multiple sectors or agents. The imprecision between systems assumes diverse ad hoc methodologies, often for the same function (activity). The result is a nonergodic system whose accuracy is described by a differential entropy.

On the other hand, under certain conditions the inverse phenomenon may occur, in the sense of reducing the system complexity to a prior state of more stable organization.
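For reference, the standard joint density and differential entropy invoked above can be written as (notation assumed for this sketch)

\[ \Pr\big((X,Y) \in A\big) = \iint_{A} f(x,y)\,dx\,dy, \qquad h(X,Y) = -\iint f(x,y)\,\log_2 f(x,y)\,dx\,dy, \]

where X and Y stand for the information exchanged between two interacting workflows or agents; when the interaction stabilizes, the density concentrates and the differential entropy falls, corresponding to the return to a more stable prior organization described above.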

The m distributions can be better summarized and classified according to Table 1.

It is worth mentioning that, in time-dependent systems, regularity allows the continuous flow of information, while possible interruptions caused by the exchange of information between distinct systems generate deceleration of subsequent processes. In other words, the frequency with which activities are performed depends on continuous flows to avoid saturation of work steps that are not finalized in the appropriate time. In large information flows, PDFs can be generated on account of chaotic profiles between time-controlled systems [50, 51].

In order to summarize the features of precision in workflows and the proposed method, Figure 9 displays the variables affecting information flow and its influence on m distributions.

3. Results

Among the 39 analyses carried out in the Sectorial Administrative Group, 3 workflow analysis results were selected. These 3 cases illustrate different situations of the administrative scope of work and contemplate the theorization of the method described in Methodology.

The method of this research uses a qualitative analysis derived from equation (6) (Methodology section), avoiding quantitative analysis. Supported by a list of questions asked of the agent responsible for executing a given activity, the answers are used for data checking according to the method descriptions given in the Methodology section. After the interview and observation are made, an analysis of the flowchart is carried out, describing the main points uncovered and the appropriate probabilistic distributions.

Those questions were not listed in the Methodology section because it is understood that the general lines that define the method can be carried out using different approaches. For this research, flowcharts representing the workflow algorithm were used, and the list of questions was enough to track the method guidelines. An IEP analysis was also made in accordance with the method guidelines.

One important issue related to qualitative research is that ad hoc analyses with agents may present inconsistencies in the answers obtained from a subject who sees, listens, and performs actions that can often be the subjective product of their cognitive abilities and activities [8, 9, 11, 38]. For this reason, no literal answers are collected; rather, the main aspects of the answers' meaning and the observations relevant to the methodology are recorded.

Mandatory questions to perform the qualitative analysis:
(1) How many agents participate in the execution of all workflows? (excluding other departments)
(2) Describe the workflow in stages.
(3) How much information is processed in each stage? (average rate of very high, high, or low amount of information). Evaluate it according to items 4 and 5.
(4) Is the information available in full detail? (for the agent who executes the activity)
(5) What method does the agent use to process the information? (check the external supports and ad hoc actions)
(6) Does the processing take too much time to be finished?
(7) Does this processing achieve the exact precision expected?
(8) Is the workflow execution dependent on other third parties or departments?
(9) Is all the information related to third parties or departments available in full detail?
(10) How much time do third parties or departments take to process their stages of the workflow?
(11) Are all the stages carried out, considering all agents and sectors involved, in a sequence of production, or do the stages have lines of production that are not regulated within time intervals?

Case 1: monitored alarm with direct contracting.
(1) How many agents participate in the execution of all workflows? (excluding other departments)
Extracted answer: 1.
(2) Describe the workflow in stages.
Extracted answer: Figure 10.
(3) How much information is processed in each stage?
Extracted answer: yellow color was used to identify stages in which a high amount of information is used.
(4) Is the information available in full detail?
Extracted answer: partially defined.
(5) What method does the agent use to process the information?
Extracted answer: default information and ad hoc analysis involving personal understanding of alarms and other technical descriptions. Common information among every administrative process that starts is used as a pattern of decision making. (The education field of the agent is not the same as the subject of the work.)
(6) Does the processing take too much time to be finished?
Extracted answer: no.
(7) Does this processing achieve the exact precision expected?
Extracted answer: partially.
(8) Is the workflow execution dependent on other third parties or departments?
Extracted answer: yes.
(9) Is all the information related to third parties or departments available in full detail?
Extracted answer: yes.
(10) How much time do third parties or departments take to process their stages of the workflow?
Extracted answer: no problems reported.
(11) Are all the stages carried out in a sequence of production, or do the stages have lines of production that are not regulated within time intervals?
Extracted answer: yes, in average ratio.

IEP analyses:
(i) Iterations: geometric.
Case analysis: no unusual report.
Suggestions: 0.
Frequency: rare.
(ii) Information flow: discrete probability distribution, monotonically decreasing.
Data source: existing and partially defined in the ad hoc methods.
Case analysis: the data set at the ad hoc level, as well as the ad hoc methods, gives the service a continuous and timely flow of information (large quantity) for most stages in the sector analyzed.
Only in decision-making for demand analysis does the working method assume a continuous, monotonically increasing probability density function, so that the parameters that predefine the demand analysis are defined by previous experiences and/or objective data that support decisions about local public safety, technical descriptions of alarms, technical descriptions of use and installation, and so on. The probabilistic density takes shape as new information (variables) enters into judgment and/or cases appear in combinatorial form (variables that affect the decision in exchange), leading to the absence of a default pattern for judgment. For this reason, it is subject to the mechanisms of subjective reason and/or phenomena of perception through mechanisms of cognitive reasoning and memory.
Suggestions: the formulation of a defined set of variables that determine whether or not there is a service demand is suggested. This would remove random hypotheses in the judgment of the matter.
Frequency: frequent.
(iii) Maximum information entropy: discrete probability distribution, monotonically decreasing.
Considering the vast ad hoc experience, the system takes on normal entropy, producing identifiable results and easy guidance on the flow of information.
Case 1 brief results: probabilistic densities are formed as a function of continuous variables present in the system. Inaccuracies may occur due to the lack of predefined information.

Case 2: bid price quotes (miscellaneous items for schools and administrative units).
(1) How many agents participate in the execution of all workflows? (excluding other departments)
Extracted answer: 1.
(2) Describe the workflow in stages.
Extracted answer: Figure 11.
(3) How much information is processed in each stage?
Extracted answer: the yellow color in the flowchart shows a high amount of information to be processed along the whole workflow.
(4) Is the information available in full detail?
Extracted answer: not for all third parties or departments.
(5) What method does the agent use to process the information?
Extracted answer: there is a general organization of work and data. Information in the sector analyzed is default, and no unusual events were observed. Third parties' interactions can affect processing due to information asymmetries.
(6) Does the processing take too much time to be finished?
Extracted answer: some stages are time consuming, while most of the time consumed comes from third parties.
(7) Does this processing achieve the exact precision expected?
Extracted answer: third parties or departments can make it harder to achieve, depending on the information they share in the process.
(8) Is the workflow execution dependent on other third parties or departments?
Extracted answer: yes.
(9) Is all the information related to third parties or departments available in full detail?
Extracted answer: no. This situation causes item 7 to happen.
(10) How much time do third parties or departments take to process their stages of the workflow?
Extracted answer: due to repetitions of pathways, documents sometimes come and go back. Frequent problems are related to the lack of information or attention on the part of third parties or departments.
It takes an undefined amount of time, at random for each type of demand.
(11) Are all the stages carried out in a sequence of production, or do the stages have lines of production that are not regulated within time intervals?
Extracted answer: due to information processing failures, iteration of stages, demands appearing to start with no pattern, and the processing times of other third parties, the whole system has no sequence among stages.

IEP analyses:
(i) Iterations: not geometric.
Case analysis: although there is only a single form of iteration, it is responsible for the poor flow of information in terms of accuracy and time. The continuous repetition of the intersectoral iteration path generates a loss of deadlines and an accumulation of execution activities, due to the randomness with which processes return to the sector concomitantly with other demands that arise and/or return in the same way. This probabilistic behavior of the iteration generates direct effects for a possible joint probability density function. Serious disadvantages can also be caused by precision failures, making the use of the service or product in loco very costly and inefficient.
Suggestions: third parties or other institutional departments should have the full data source shared with all the agents of the workflow. Models and reference terms can be used to create tendencies and reduce variance among samples.
Frequency: frequent.
(ii) Information flow: continuous probability density function, monotonically increasing.
Data source: default and/or ad hoc and/or undefined.
Case analysis: the diverse sectors that elaborate the procedures composing the quotation do not observe defined criteria of analysis to draw up the technical specifications and technical objects; thus, the sector that analyzes these items rejects the documentation and returns it to the requesting sector.
The information flow of the system is very high. The intersectoral relationship causes a loss of accuracy of the transmitted content.
Theoretically, it is possible to affirm that the ad hoc methodology is responsible for the regular amount of information to be processed by the requesting sectors, since these may be mismatched, a factor that would cause critical entropy.
Suggestions: it is suggested that the information used in this workflow be defined through legal documentation and/or guidance manuals, with information on how to prepare quotation requests and other technical criteria that are constantly rejected in the quotation analysis sector. This procedure avoids repetitions and possible error transmissions that may not be theoretically observed by any of the steps.
Frequency: frequent.
(iii) Maximum information entropy: discrete probability distribution, monotonically decreasing.
The system achieves discrete-type entropy and average stability based on the ad hoc skills of the agent responsible in the sector. However, significant losses are occurring in terms of the time needed to perform all activities. Although there is stable entropy, it is strongly recommended to adjust the flow of information so that the probabilities of errors are not strictly expressed.
Thus, changes in the execution of activities of the other departments are vital to the quality of work and to the interpersonal relationships between agents and sectors.
Case 2 brief results: the system, although with discrete entropy and with a reduction of randomness, presents nongeometric iterations that characterize repetitions in the flow of information, as well as diverse intervals for each stage of the flow, generating a time function as a precursor of possible probabilistic densities or differential entropy.

Case 3: internal and external telephone calls.
(1) How many agents participate in the execution of all workflows? (excluding other departments)
Extracted answer: 3.
(2) Describe the workflow in stages.
Extracted answer: Figure 12.
(3) How much information is processed in each stage?
Extracted answer: very high demand.
(4) Is the information available in full detail?
Extracted answer: no; partially defined or undefined.
(5) What method does the agent use to process the information?
Extracted answer: there are agendas with default data. However, new information appears monthly and data are not shared among departments. The telephony sector remains constantly outdated. Processing follows ad hoc methods that are not efficient enough in terms of time, effort, and precision of data sharing.
(6) Does the processing take too much time to be finished?
Extracted answer: sometimes yes. Iterations can happen, extending the processing even more in the sector. Iterations can lead to uncertain pathways of transferring information among other departments.
(7) Does this processing achieve the exact precision expected?
Extracted answer: not always.
(8) Is the workflow execution dependent on other third parties or departments?
Extracted answer: yes; departments within the institution.
(9) Is all the information related to third parties or departments available in full detail?
Extracted answer: no.
(10) How much time do third parties or departments take to process their stages of the workflow?
Extracted answer: due to outdated data, information processing among departments can keep iterating until it finds the best solution.
(11) Are all the stages carried out, considering all agents and departments involved, in a sequence of production, or do the stages have lines of production that are not regulated within time intervals?
Extracted answer: sequences keep iterating until the best solution is found. There is no regular time interval. All stages are unstable in time length.

IEP analyses:
(i) Iterations: there is no occurrence.
Case analysis: without unusual events.
Suggestions: 0.
Frequency: 0.
(ii) Information flow: joint probability density function distribution.
Data source: partially defined, undefined, or ad hoc.
Case analysis: the information flow of the system is too high. A verified point in relation to the information flow of the system is the condition in which the processing takes place, which makes the precision very random, and the required time is higher than recommended for these cases of customer service centers.
Defined as a monotonically increasing probability density function, this type of service presents a continuous growth of low probabilities of precision within the information flow.
The sector does not have Internet access, which makes it very difficult to search for information related to the institution and/or internal information required by other departments.
The lack of Internet access necessarily generates constant failures in the accuracy with which information about the institution is transferred in real time. In that sense too, there is no software that updates the phone extensions and institutional agents. Staff turnover as well as branch modifications are constant. There is no feedback between new information and old information, which at the ad hoc level forces the agents to search randomly, according to their level of knowledge, among more than 500 agents and more than 1000 telephone extensions of the institution.
Suggestions: it is recommended that an information system be implemented that can be updated with the extensions and names of institution departments, agents, and any other information for the external community. It is also imperative that all computers in the place have access to the Internet.
Several internal activities of the institution depend on information in constant flow. In this sense, several sectors in their entropy analysis (IEP) showed strong indications that they need certain types of information to be allocated in another information system, not exactly the telephony sector itself, but a virtual environment that may gather internal activities with the implementation of intersectoral workflows.
Frequency: frequent.
(iii) Maximum information entropy: differential entropy.
Large and unpredictable variations occur, as in a system of continuous generation of uncertainties. The ad hoc functions in this sector are among the main causes of the relative stability of the service. However, because of the large amount of information, ad hoc skills are not sufficient to process all the information in the time required. External support is highly recommended.
Case 3 brief results: the constant input of new information into the system, and the failure to update the processing of the variables that constitute the execution of the service in relation to this new information, give the system, even as an efficient adaptive system, a differential entropy.

The practical use of the mathematical modelling for workflows explained in Section 2 was developed within these 3 examples to show how the methodology can be used as a tool for handling empirical data.

The most important contribution of this research lies in the analysis of soundness properties in order to achieve correctness and completeness [24, 54]. There are several methods to achieve correctness or completeness, but they vary greatly depending on the empirical data and the types of variables involved. According to Liu [54], soundness investigations can be the basis of workflow improvement, given the importance of finding undefined connections between workflow goals and state conditions.

The main basis of this research, information entropy and probabilities, can give a broader view of how the soundness properties of workflows can be optimized towards precision performance. The three results show that the semantic aspects of the proposed method can be empirically related, and this connection makes it possible to analyze the system in order to achieve classification and decidability of the system's processing of information.

It was also verified that many types of empirical expressions of the theoretical basis of this research can be observed through the default questionnaire proposed. This analysis technique and theoretical model allow investigations of the soundness aspects of workflows without resulting specifically in the correctness or completeness of a unique solution. However, in the three examples given, these investigations demonstrated the instability aspects of information and its processing, leading the researcher to a discrete tool of classification and posterior decidability for correctness and completeness properties.

The mathematical definitions used in the theoretical framework were not demonstrated with specific calculations for each of the three results in this section. One of the main aims of this research was to create a method through which professionals or researchers not skilled in mathematics could access and obtain the same results that mathematical notation could give. The questionnaire tool addresses this main point for performing the soundness analysis of workflows, together with the theoretical explanations for each of the probability categories and information entropy axioms.
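
To illustrate how the questionnaire can stand in for explicit calculation, the following Python sketch shows one hypothetical way of mapping interview answers onto a coarse IEP classification; the field names, labels, and decision rules are assumptions made for this example only and are not the authors' published coding scheme.

def classify_workflow(answers):
    """answers: dict of boolean fields summarizing the interview and observation."""
    # Repeated intersectoral pathways suggest nongeometric iterations (cf. case 2).
    if answers.get("iterations_between_sectors"):
        iterations = "not geometric"
    else:
        iterations = "geometric"

    # Availability and stability of information suggest the distribution type.
    if answers.get("information_fully_defined"):
        flow = "discrete, monotonically decreasing"
        entropy = "discrete (stable)"
    elif answers.get("new_information_constant"):
        flow = "joint / continuous probability density"
        entropy = "differential (unstable)"
    else:
        flow = "continuous, monotonically increasing"
        entropy = "between discrete and differential (borderline)"

    return {"iterations": iterations, "information_flow": flow, "entropy": entropy}

# Example roughly resembling case 3 (telephone calls): undefined data, constant new information.
print(classify_workflow({
    "iterations_between_sectors": False,
    "information_fully_defined": False,
    "new_information_constant": True,
}))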

The correctness rigidity of a workflow does not imply its accuracy or the possibility of improvements [54], mainly in CAS workflows where continuous variables are the object of activity execution. In this respect, the three examples analyzed reveal an important aspect of the methodology, which is to check how the soundness of the workflow is presented. The main feature of the mathematical modelling proposed in this research resides in the analysis of soundness and workflow management rather than in giving prompt solutions (correctness/completeness [24]) or a stationary model for real-life problems, where semiotic problems may arise due to the nature of the CAS and its continuous variables. The theoretical information entropy and probabilistic basis of this model supports a broader range of execution, planning, and decidability analysis regarding the soundness analysis of workflows. Following this path, the basic definitions of the methodology (Section 2) reveal a natural tendency of biological interactions to deconstruct logical stability at any level of any system. This semidefined state of information processing (semiotic and probabilistic state) described in the methodology section can be fruitful for more prominent lines of workflow soundness analysis as a self-adaptive system.

It is possible to consider that probability, information theory, and continuous variables (the main framework of the model) within a system constitute inconsistent data for supporting workflow planning and the attainment of precision. However, considering the categories of soundness given by Aalst [24], this framework would be unsuitable for the robustness of the model only if the results obtained in this research were observed exclusively from the correctness and completeness point of view. In terms of the analysis of a workflow system, the XOR/AND-splits/joins constructs of a standard workflow language such as Petri nets are not the only visible semantics within a system when this language is compared to the scope of the interaction and iteration semantics investigated for soundness properties in a given workflow under the IEP method. Broader and more complex language tools can be used by the agent to search for empirical or theoretical proof of where the system and its deadlocks, livelocks, flaws, etc., are [24, 54]. The main soundness contribution of this model is the possibility of seeing how bounded and unbounded conditions of information in a workflow can present asymptotic behavior (monotonically increasing and decreasing), leading the analyst to higher precision of system performance and soundness analysis. This feature is very important for achieving higher correctness of system performance, as Liu [54] pointed out.

Another dimension of the proposed model, besides the analysis of a workflow system, is the design of a workflow. Insofar as activity execution must satisfy soundness criteria such as the option to complete, proper completion, and no dead transitions [24], the main goal of any workflow performance is, above all, accuracy. Building a workflow and its language in a broader sense could be carried out considering uncertainty-based systems for both discrete and continuous variables. Following this path, a PSPACE system with unbounded conditions was found in example 2 of the Results section, where correctness of the system is the most important priority in order to reach an end point in the workflow execution. However, the unbounded condition makes the system improper for achieving this performance through a PSPACE path, making it questionable whether the workflow execution could be completed with the desired performance. Considering the proposed model, the soundness of analyzing this example 2 system in view of probability and information theory axioms allows the agent to perform an ad hoc method of work, supporting the soundness properties in the direction of correctness performance. In this sense, even with an unbounded condition, discrete or continuous variables can be treated from the soundness point of view by the agent, who can conceive a dynamic input of work forms in order to achieve high precision and decidability for the event.

However, a case of unbounded and asymptotic workflow information that presents weak soundness, in which the system has a monotonically increasing source of information as well as a lack of pattern formation from the perspective of individual cognitive skills, can be limiting to the workflow execution and to high standards of accuracy.

It is possible, for example, to observe in example 3 of the Results section that the workflow performance is very unstable due to the poor planning of activity execution, which is, it is worth saying, a very simple, daily activity of answering and forwarding phone calls. The lack of central information processing in example 3 was not considered when the workflow execution was planned. Indeed, it seems obvious that any institution would create a node of information processing and data storage for this type of work; however, this does not match the empirical flow of information when the available data behave chaotically during their formation or present other uncertain features that can affect the system. These statements are also examples of the soundness analysis of the workflow in example 3. In case 3, in order to correct the random behavior of variables, a central data storage and information processing point was suggested, in which several agents feed the system with information variables. In this way, as in the Internet of Things, this central node of information can update data that arrive as a continuous source of information (note that this is an abnormal situation not usually found in this type of service, i.e., several sectors in a firm change their phone numbers due to administrative instability, as many as 6 times within a semester), since the pattern formation of the data is not defined at its basis by a discretization process.

In the first case of the Results section, it is possible to find a PSPACE in a bounded condition, where the weak soundness occurring in the information flow would otherwise represent a deadlock in the system. However, the information processing in a PSPACE and bounded condition need not limit the correctness that can be achieved through the agent's processing skills. In this specific situation, as a CAS domain of information processing, the agent's role in the accuracy of activity execution represents exactly one of the main problems discussed in this research, namely the presence of continuous variables, in this case sourced by the agent itself and not by the nonliving components of the system or the availability of data. The IEP soundness analysis could uncover this scenario with the objective of achieving positive performance towards precision of information processing. This possibility was already shown to hold in Liu [54] through different techniques.

4. Discussion

The objective of producing reports for the analysis of labor activities is to understand the workflow system deductively and in a global manner, based on some mathematical premises. In this way, the method gives a theoretical view of the phenomenon, making room for the researcher to see the system in its probabilistic and entropic state of work production. Although the proposal is theoretical, at the moment the analysis is performed, empirical evidence (facts) corroborates the findings about the flow of information in the research conducted.

Thus, the IEP method differs from others by conceiving the problem and analysis from a theory about the phenomenon, relying on sporadic empirical evidence, but based mainly on the empirical analysis, which consists of verifying the availability of information for the execution of a work and how this information is processed by the agent(s). It is also possible to observe the evolutionary aspect of the system, as a system that may or may not achieve different constitutive properties in the form of the probabilistic distribution of the information, generating differentiated levels of intervention that oscillate in time and can be monitored and strategically directed. An example of applying the IEP method to an analogous strategy [55] is indicated by Van Der Aalst et al. in discriminating flexibility, enforcing guidelines, and declaring a framework. Comparing the two methods, the possible flexibility of the framework is reached when experienced analysis mechanisms of the user are brought about by the use of a certain workflow over time; however, the flexibility reached by defining workflows through a conceptual mathematical structure allows a greater reach between analyses based on empirical evidence and analyses by inference, also leaving open guidelines for a later reformulation. In this sense, there is complementarity among the methods aimed at the improvement of the system.

In general, the importance of using a noninductive method to analyze workflows is to avoid the error of not verifying a flow as a system that evolves, adapts, and needs a margin of evaluation, characteristics that allow the manager to observe the dynamics of the system over time and the system as a whole [3, 28].

It was also verified that the treatment of the evidence supporting the theorizations about the flows was carried out not only at the moment of the analysis of the flows. The opinions of other analysts (other staff or related personnel) also helped to identify further empirical evidence that sustained the identified probabilistic distributions. In this way, the source of options for solving the problems of each flow by the proposed method should not be understood as exhausted, because after the IEP indicates the theorization, the analysis phase remains a very important task.

Ad hoc methodologies can usually be considered important attributes in the selection of human resources in a firm, but they are not a vital measure of a firm's survival if the information systems are obsolete and/or precarious. In another sense, if human resources are replaced and this causes the firm's vitality to be completely reduced, it also demonstrates inefficiency in the management of the information in the workflow due to the invisible nature of the data source.

It is possible to list some characteristics of the proposed methodology as follows:
(i) Identification of presence/absence of information systems and types
(ii) Critical points in the flow of information
(iii) Identification of the presence of ad hoc models and work performance
(iv) Evolution of the information system from predefined mathematical definitions and modelling
(v) Inference about accuracy and processing time of information
(vi) Decision making for management
(vii) Strategic analysis/monitoring of sectoral and intersectoral flows
(viii) Empirical analysis of the information flow excluding the subjective perception of the analyst and the agents
(ix) Probabilistic and entropy observation of information in complex adaptive systems
(x) Participatory management and continuous improvement
(xi) Flexibility of analysis, investigation of evidence, and modelling of workflows

A workflow methodology for scientific research in the area of statistics, similar to IEP, is presented by Gabry et al. [56], in which empirical evidence is contrasted with simulations of the phenomenon for the PDF distributions of the analyzed events [56]. The main difference between the two approaches is how empirical evidence is used to constitute an overall analysis and how the variables' distributions are predefined for posterior investigation purposes. This feature makes the IEP analysis broader in flexibility.

The IEP, together with a Bayesian analysis [56], represents the attempt to accurately cover the expected results without considering the empirical evidence as the only analysis tool, thus suggesting a theorization about the event that fits the actual facts in order to seek a statistical method. That research [56] shows an interesting form of workflow, since alternative methods are proposed to reach precision of results consisting of very heterogeneous probability distributions in the set of samples. Analogously, the method of analyzing the information flow proposed in this research represents the same approach; however, it limits itself to formulating the mathematical structure of the phenomenon and its probabilistic distribution state, leaving aside the search for all the evidence (analogously, the alternative methods) that proves the theory, this task being the responsibility of the manager and of his choice of the working method most appropriate to the probabilistic distributions of the system in question.

Another interesting point of analysis of workflows is the symmetry of information between agents, sectors, and public or private firms. The symmetry of information, as already described by Holmstrom and Tirole [57], plays a fundamental role in the evolution of the administrative structure, aiming at the best cost-benefit for those involved in the negotiations. A workflow is also a negotiation, from the point of view that a work demand, when requesting information to be processed and brought to final execution, depends on the information being clear and on its methods of use being as visible as possible for sequencing in the flow when it depends on time or on complex operational steps. It also serves as support for the equity and equality of information among agents in terms of professional performance, self-esteem, and motivation in a competitive or socially exclusive environment [58].

The role of leadership in the organizational structure of a firm also has a great influence on asymmetric workflows among agents, as does the transmission of messages by the leader, which depends on correct processing to accurately achieve the expected result. An important feature of analyzing workflows is that it is not a question of finding great talents as human resources to account for chaotic systems, but of making the execution of the activity viable for anyone (avoiding turnover), which brings benefits not only for the firm but also for the manager in his managerial responsibilities, for the production of the firm, and for the time in which the activities are expressed as a whole. Another important aspect of the IEP analysis is the way it is conducted, providing an environment for participatory management among agents and the vertical hierarchy of an organization.

A critical point found in linear workflow models, understood, for example, as consisting of well-defined physical or digital information in which variables are given as discrete, is that such information cannot always be observed and known by other individuals. This constitutes a communication problem within the firm, as well as a problem of organizational routines of work and sociability. It can affect processing skills and the precision of productivity, but it remains another issue to deal with in the organizational structure. Also, in situations where information is in an ad hoc state consisting of continuous variables, linearity in the flow of information can be compromised by noise caused by the various ways of understanding a given service and, at the same time, communicating it to another person [1].

The context in which variables promote oscillations of precision in discrete or continuous workflows is also relevant for differentiating methodologies such as Bayesian and Kolmogorov approaches, which can serve as tools for the analysis of evidence in a discrete framework of analysis or, with adaptations, to constitute a hybrid system. Both have preconceptual definitions that do not allow them to be used directly as tools for continuous variables, as happens with the information entropy and probability (IEP) method. The justification for this proposition lies in the existence of the continuous variables of the event, which, as products of CAS [37, 38], are not previously defined as universal scientific evidence in terms of quantitative parameters already well described in science, but as components that constitute a workflow in continuous adaptation between the organism and the object. Thus, the IEP method of workflow analysis allows the identification of patterns of workflows, which, at a second moment, can be worked on with the choice of an appropriate methodology, acceptable in a broader sense of qualitative or quantitative analysis.

It is highly suggested that other researchers in this field, or in related modelling dynamics, take into consideration mainly the several properties of biological systems as an adaptive integration of multidimensional analysis and mathematical approaches.

This research was performed exclusively for qualitative mathematical evaluation purposes and not for an in-depth calculation analysis of workflows. The qualitative analysis can be applied to flowcharts or other qualitative methods of workflow and information processing analysis. Nevertheless, calculations can be useful for digital workflows and information processing, with several approaches that can be implemented by using equation (6) and its derivations. A very similar approach towards a quantitative method (full mathematical descriptive formulation) for object-to-object interaction, and a formal framework for probabilistic unclean databases (PUD), was proposed for data analysis by Ilyas et al. [59]. A full description of structured predictions to achieve probabilistic modelling was developed in [59], and it is suggested as a tool for information processing; as the authors call for further investigation of these new techniques, programming parameters that consider biological empirical variances could serve as input data and yield a more descriptive account of CAS, since this issue (quantitative descriptive methods) was not covered by this research.

Although it was found in recent research by Trübutschek et al. [47] that there is no subjective source for cognitive executions via working memory, it is interesting to note the great dimension of analysis that exists when issues such as subjectivity, sensory memory, and perception are considered as a field of study not yet defined in terms of their effects on precision and productivity in information processing in a labor activity. At this point of the analysis, no empirical investigation had been carried out to provide information about how an agent processes information in terms of quantity and quality. Instead of creating a parameter for this, the research on workflows dealt with topics related to the cognitive sciences as an important point of the analysis, which composes the continuous variables and allows the inference of the probabilistic distributions of the information flow in work environments. The possibility that human beings process information devoid of subjectivity in temporal and spatial terms [47], and even in a broader sense, is knowledge that contributes to the understanding that the influence an agent can have in processing information also determines the precision and production of the execution of activities.

5. Conclusion

Epistemologically and scientifically, the importance of using a noninductive method to analyze workflows is to avoid the error of not verifying a flow of information as a system that evolves, adapts, and needs a margin of evaluation, characteristics that allow the manager to observe the dynamics of the complex adaptive system over time and the system as a whole.

It was also verified that the treatment of the objective and subjective (qualitative research) evidence supporting the theories about the flows can be successful not only at the moment of the analysis of the flows, as it is further processed in the opinions of other analysts, who could identify more empirical evidence sustaining the identified probabilistic distributions.

Thus, ad hoc methodologies are always important in the selection of human resources, but they are not a vital measure of a firm's survival if the information systems are obsolete or precarious.

Although productivity in a firm can be predicted by the mathematical formulation in Section 2, reaching 100% precision in workflows can be challenging and is strictly more suitable for artificial intelligence. This must be observed in the same proportion of impact as an organism's cognitive potential to adapt the information, or its processing, into new possibilities of structures and/or modelling. For this last point, artificial intelligence is currently very unsuitable for this purpose, according to several modern authors.

The importance of analyzing a workflow can be verified in several dimensions of human organizations. As presented in the Results section, the information flow within a workflow influences the execution of agent activities, the production of a firm, the motivation and professional stability of agents, participatory management, new tools for risk analysis, the reduction of bureaucratic management, the management of complex administrative processes, impartiality of analysis, the optimization of natural and financial resources, the improvement of ad hoc performance, sustainable development, intelligent construction, and other aspects in related areas of knowledge.

Another important aspect of mathematical modelling for workflows is the definition of ad hoc models as composed of a complex adaptive system formed by the cognitive dimension of the agent, knowledge, the external support of the work environment, and the experience lived by the agent, which results in positive or negative potentials of information processing. These processes can generate a probabilistic and entropic production of information and, therefore, affect the precision with which an activity is executed. The physical-mathematical definition that is verified theoretically and by empirical evidence in the diagnosis of flows also defines the time required to perform the activity, which, across the whole analysis, is the information flow through time, and this can occur in a geometric or nongeometric way. For nongeometric information flows, time assumes another role in the complexity of the events, generating other probabilistic and entropy definitions. A mathematical modelling for workflows also helps to identify where discrete and continuous variables start (ad hoc models). This is not a definite, exact, and precise line to identify, but the theorization makes it possible to search for the evidence that proves the theory supported by physical-mathematical descriptions. In several analyses, the influence of time as a factor promoting the accumulation of activities to be performed, as well as the loss of production frequency, was observed, generating diverse distributions in the system.
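
As a point of reference for the distinction drawn above between geometric and nongeometric information flows, and assuming "geometric" is read in the usual probabilistic sense, the number of independent attempts K needed for an activity to succeed, with a stable per-attempt success probability p, follows the standard geometric distribution:

\[
  P(K = k) = (1 - p)^{k-1}\, p, \qquad k = 1, 2, 3, \ldots
\]

When iterations are not independent attempts with a stable success probability (the nongeometric case), waiting times lose this simple structure, and time itself becomes an additional source of probabilistic and entropic complexity, as noted above.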

A final conclusion addresses the equation for productivity patterns in workflows (equation (6)). To avoid calculations, this equation was transformed into a qualitative research instrument consisting of specific questions, which makes it useful for professionals not skilled in mathematical calculations. This research did not develop other implications of the use of the productivity equation, due to the restricted range of issues for which this research was designed. Further investigations are highly recommended for digital workflows and for other mathematical approaches to the behavioral and cognitive sciences.

Data Availability

All data are available within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.