Data Flow Diagram
Phase 2: Process Concept Evolution
Henrik von Scheel, ... Ulrik Foldager, in The Complete Business Process Handbook, 2015
Data Flow Diagram
A Data Flow Diagram (DFD) is a graphical representation of the "flow" of data through an information system (as shown in the DFD flowchart in Figure 5), modeling its process aspects. Often it is a preliminary step used to create an overview of the system that can later be elaborated. DFDs can also be used for the visualization of data processing (structured design) and show what kind of information will be input to and output from the system, where the data will come from and go to, and where the data will be stored. A DFD does not show the timing of processes or whether processes will operate in sequence or in parallel.
It is common practice to draw the context-level data flow diagram first, which shows the interaction between the system and the external agents that act as data sources and data sinks; this fixes the system boundary before any internal detail is drawn. The system's interactions with the outside world are modelled purely in terms of data flows across that boundary. The context diagram shows the entire system as a single process and gives no clues as to its internal organization.
This context-level DFD is next "exploded" to produce a Level 1 DFD that shows some of the detail of the system being modeled. The Level 1 DFD shows how the system is divided into subsystems (processes), each of which deals with one or more of the data flows to or from an external agent, and which together provide all of the functionality of the system as a whole. It also identifies internal data stores that must be present in order for the system to do its job and shows the flow of data between the various parts of the system.
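To make the leveling idea concrete, the sketch below (not from the chapter; every element name is invented) records a context-level DFD and its Level 1 explosion as plain Python data, and checks that each boundary flow of the context diagram reappears at Level 1.

```python
# Minimal sketch of DFD leveling; all names here are invented for illustration.
context_dfd = {
    "process": "Order handling system",            # the whole system as one process
    "external_agents": ["Customer", "Warehouse"],
    "flows": [                                     # (source, target, label)
        ("Customer", "Order handling system", "order"),
        ("Order handling system", "Warehouse", "pick list"),
        ("Order handling system", "Customer", "invoice"),
    ],
}

level_1_dfd = {
    # The single context process "exploded" into numbered subsystems.
    "processes": {"1": "Receive order", "2": "Check stock", "3": "Issue invoice"},
    "stores": ["Orders", "Stock levels"],          # internal stores become visible
    "flows": [
        ("Customer", "1", "order"),
        ("1", "Orders", "order record"),
        ("2", "Warehouse", "pick list"),
        ("2", "Stock levels", "stock record"),
        ("3", "Customer", "invoice"),
    ],
}

# Every flow crossing the system boundary in the context diagram
# should reappear somewhere in the Level 1 diagram.
boundary_labels = {label for _, _, label in context_dfd["flows"]}
level_1_labels = {label for _, _, label in level_1_dfd["flows"]}
assert boundary_labels <= level_1_labels
```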
URL: https://www.sciencedirect.com/science/article/pii/B9780127999593000021
Diagram Types
Susan Fowler, ... FAST CONSULTING, in Web Application Design Handbook, 2004
Software Design Diagrams
Software analysts and designers use a variety of diagrams, not for variety's sake but because different types of diagrams highlight different aspects of a system. For example, data-flow diagrams show the functions of the system, state-transition diagrams show the timings in the system, and entity-relationship diagrams show the data relationships.
The diagrams software developers use will also depend on their development culture—in other words, what most people are familiar with, what your spec-writing guidelines say, whether the software company does mostly transactional systems or object-oriented systems, and so on.
The three most common software diagrams and a few of their variations are listed in this section. For information on other types of software diagrams, see Resources.
Data-Flow Diagram
The data-flow diagram (also known as the DFD, bubble chart, process model, business-flow model, workflow diagram, or function model) is good for displaying the functions of a system but not good for modeling databases or time-dependent behavior (Figure 13-30).
The data-flow diagram (DFD) pictures a system as a network of functional processes connected with flows, plus occasional collections (called stores) of data (Yourdon 2001, pp. 1–3).
Primary Symbols
Processes or functions are shown as circles or sometimes as rounded rectangles. A process shows a part of the system that transforms inputs into outputs. The label should be a word, phrase, or short sentence that says what the process does—for example, "Find information on DFDs."
Flows are shown as arrowed lines (either straight or curved). The label should say what kind of information or item moves along the flow—for example, "Web link."
Stores (places where data are stored) are shown as two parallel lines or an open-ended rectangle. Not all systems have stores.
The label is generally the plural of the name of the items carried by the flow into and out of the store—for example, "Web links." (This implies that the same items go in and out, so the flows into and out of the store will have the same labels.)
Terminators, external entities or people with which the system communicates, are shown as rectangles. The label is the name of the terminating entity (for example, "Web application book"), person, or group of people.
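As a rough illustration of these four symbol types (a sketch, not the book's notation), the snippet below models them as Python classes using the labels quoted in the text:

```python
from dataclasses import dataclass

@dataclass
class Process:        # drawn as a circle or rounded rectangle
    label: str

@dataclass
class Flow:           # drawn as an arrowed line between two elements
    label: str
    source: str
    target: str

@dataclass
class Store:          # drawn as two parallel lines or an open-ended rectangle
    label: str

@dataclass
class Terminator:     # drawn as a rectangle: an external entity or person
    label: str

# Labels follow the examples used in the text.
find_dfds = Process("Find information on DFDs")
links = Store("Web links")                          # plural of the items it holds
reader = Terminator("Web application book")
link_flow = Flow("Web link", source=find_dfds.label, target=links.label)
```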
Manage Complexity by Breaking the Diagrams into Levels
Because the diagrams can become as complex as the systems they describe, analysts have a method for breaking up a diagram into manageable pieces: They create levels. They start with one "context" diagram that shows the entire system at a glance, numbering the major functions—see, for instance, the numbered bubbles in Figure 13-30. Then, subsequent diagrams use subnumbers that refer back to the context diagram—"1.1 Receive orders via web site"; "1.2 Receive orders via phone"; and so on (Yourdon 2001, pp. 15, 19–20).
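A small sketch of this numbering convention follows; the lookup of a process's sub-processes by their subnumbers is our own illustration, not part of the notation itself.

```python
# Hypothetical levelled DFD numbering, using the "1.1", "1.2" labels from the text.
context_level = {"1": "Receive orders", "2": "Fulfil orders"}
level_1 = {
    "1.1": "Receive orders via web site",
    "1.2": "Receive orders via phone",
    "2.1": "Pick and ship orders",
}

def subprocesses(number: str, levelled: dict) -> dict:
    """Sub-processes whose subnumbers refer back to a context-level process."""
    return {n: name for n, name in levelled.items() if n.startswith(number + ".")}

print(subprocesses("1", level_1))
# {'1.1': 'Receive orders via web site', '1.2': 'Receive orders via phone'}
```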
Entity-Relationship Diagram
The entity-relationship diagram (also known as the ERD or E-R diagram) is good for describing the layout of a stored-data system. It is not good for modeling functions or time-dependent behavior (Figure 13-31).
The entity-relationship diagram (ERD), because it is relatively simple and familiar, is a good communication tool. It can be shown to:
- Executives who ask about the data used to run the business.
- Systems analysts who need to see the relationships between data storage systems.
- The data-administration group that maintains the global, corporate-wide information model.
- The database administration group that manages the corporate databases and implements changes.
Primary Symbols
ERDs have two main components: object types and relationships. Object types are shown as rectangles. An object type represents a collection or set of objects in the real world. The label is a noun or name, usually singular.
Note that objects in an ERD can correspond to stores in a related DFD. For example, if there is a CUSTOMER object in the ERD, there should be a CUSTOMERS store on the DFD.
Relationships are indicated with lines (or diamond shapes). One-to-one and one-to-many relationships can be indicated using single-headed arrows (1 to 1) and double-headed arrows (1 to many). Direction (from Object 1 to Object 2) can be shown with the arrows as well. A required relationship can be shown with a short line; an optional relationship can be shown with an open circle.
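The sketch below (invented names, not the book's example) captures the two ERD components and the cardinality and optionality attributes just described, along with the CUSTOMER/CUSTOMERS correspondence to a DFD store noted above.

```python
from dataclasses import dataclass

@dataclass
class ObjectType:               # rectangle: a set of real-world objects
    name: str                   # singular noun, e.g. "CUSTOMER"

@dataclass
class Relationship:             # line (or diamond) between two object types
    source: ObjectType
    target: ObjectType
    cardinality: str = "1:N"    # "1:1" (single-headed) or "1:N" (double-headed arrow)
    required: bool = True       # short line = required, open circle = optional

customer = ObjectType("CUSTOMER")
order = ObjectType("ORDER")     # hypothetical second object type
places = Relationship(customer, order, cardinality="1:N")

def matching_store_label(obj: ObjectType) -> str:
    """The DFD store corresponding to an ERD object is usually the plural name."""
    return obj.name + "S"       # naive pluralisation, good enough for the sketch

print(matching_store_label(customer))   # CUSTOMERS
```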
Variations
Unified Modeling Language Logical and Physical Data Models
Unified Modeling Language (UML) logical and physical data models are good for modeling object-oriented databases. The diagrams can indicate inheritance as well as a wide variety of relationship types.
The diagrams can be used to show either logical (Figure 13-32) or physical models (Figure 13-33).
- Logical data models (LDMs) show the logical data entities, the attributes describing those entities, and the relationships between entities.
- Physical data models (PDMs) show the internal schema of a database, including the data tables, their data columns, and the relationships between tables.
The visual components are rectangles, with an area at the top for the object name, and lines that show the relationships between tables and, in some cases, inheritance. For more information on notation and analysis, see the books listed in Resources.
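To show the logical/physical distinction in miniature (a sketch with invented entities, not a UML tool's output), the same customer and order information is recorded below first as a logical model and then as a physical schema:

```python
# Logical data model: entities, attributes, and entity-level relationships.
logical_model = {
    "Customer": {
        "attributes": ["name", "email"],
        "relationships": [("places", "Order", "1:N")],
    },
    "Order": {"attributes": ["order date", "total"], "relationships": []},
}

# Physical data model: tables, columns, and the foreign key that realises
# the 1:N relationship from the logical model.
physical_model = {
    "customer": {"customer_id": "INTEGER PRIMARY KEY",
                 "name": "VARCHAR(100)",
                 "email": "VARCHAR(255)"},
    "customer_order": {"order_id": "INTEGER PRIMARY KEY",
                       "customer_id": "INTEGER REFERENCES customer",
                       "order_date": "DATE",
                       "total": "DECIMAL(10, 2)"},
}
```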
State-Transition Diagram
The state-transition diagram (Figure 13-34) is good for showing a system's time-dependent behaviors. Originally designed for real-time systems such as process control, telephone switching systems, high-speed data acquisition systems, and military command and control systems, state-transition diagrams are now used whenever timing might be an issue—for example, if thousands of terminals might hit a database at the same time or when activities occur in flurries, as shown in Figure 13-35.
Primary Symbols
State-transition diagrams have symbols for states, transitions, and conditions and actions.
States are shown as rectangles. Labels should describe the state the system can be in—for example, WAITING FOR CARD.
Transitions are shown as arrows connecting related pairs of states. Although the transitions are not labeled, rules about valid connections are implied by the arrows themselves. For example, in Figure 13-35 you can see that the WAITING FOR CARD state can return to IDLE or change to WAITING FOR PASSWORD. It cannot jump directly to DISPLAY BALANCE. You can also see that IDLE is the first state and WAITING FOR CASH REMOVAL is the last (at least in this transaction).
Conditions and actions are shown as a line and two short sentences, with conditions above the line and actions below it.
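A compact way to encode those three symbol types is sketched below. The state names and the two impossible-transition checks come from the ATM-style example in the text; the condition/action pair and any transitions not mentioned there are assumptions.

```python
# Valid transitions implied by the arrows in the diagram (partly assumed).
transitions = {
    "IDLE": ["WAITING FOR CARD"],
    "WAITING FOR CARD": ["IDLE", "WAITING FOR PASSWORD"],
    "WAITING FOR PASSWORD": ["IDLE", "DISPLAY BALANCE"],
    "DISPLAY BALANCE": ["WAITING FOR CASH REMOVAL"],
    "WAITING FOR CASH REMOVAL": [],          # last state in this transaction
}

# Condition (written above the line) and action (written below it) per transition.
conditions_actions = {
    ("WAITING FOR CARD", "WAITING FOR PASSWORD"):
        ("Card inserted", "Prompt for password"),   # invented example pair
}

def can_transition(current: str, nxt: str) -> bool:
    """A move is valid only if the diagram draws an arrow between the two states."""
    return nxt in transitions.get(current, [])

assert can_transition("WAITING FOR CARD", "WAITING FOR PASSWORD")
assert not can_transition("WAITING FOR CARD", "DISPLAY BALANCE")   # no direct jump
assert not transitions["WAITING FOR CASH REMOVAL"]                 # terminal here
```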
URL: https://www.sciencedirect.com/science/article/pii/B978155860752150013X
Parallel Computing
B. Stein, J. Chassin de Kergommeaux, in Advances in Parallel Computing, 1998
3.1 Compounding component
Structuring the environment as components of a data-flow diagram is well suited to the implementation of modules that need only a single access to the information derived from the trace. An example of such a module could be one that computes the CPU usage of nodes: for each received thread state that is not a blocked state, the module adds the duration of the state to an accumulator corresponding to the thread's node; it is never necessary to access the state again. Another example is a passive visualization module that, for each received entity, displays a corresponding visual representation that cannot be interrogated or changed.
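A single-pass module of this kind is easy to sketch. The snippet below follows the CPU-usage example; the ThreadState fields and the sample values are assumptions about the trace format, not the authors' actual data structures.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ThreadState:          # assumed trace record: node, duration, blocked flag
    node: int
    duration: float
    blocked: bool

cpu_usage = defaultdict(float)          # one accumulator per node

def on_thread_state(state: ThreadState) -> None:
    """Consume each state exactly once; it never needs to be accessed again."""
    if not state.blocked:
        cpu_usage[state.node] += state.duration

for s in (ThreadState(0, 1.5, False), ThreadState(0, 0.4, True), ThreadState(1, 2.0, False)):
    on_thread_state(s)
print(dict(cpu_usage))                  # {0: 1.5, 1: 2.0}
```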
However, if the module receiving the data is interactive, it needs to access the data objects several times. This is the case for a visualization module that allows the user to inspect the displayed objects or that provides a historical view of the trace (allowing one to display previous portions of the trace). If such a module receives the data objects traveling on the data-flow graph independently, it must either store the objects or query them again each time they are needed. The first solution adds complexity to such modules, which must manage a large volume of data, and leads to data replication if more than one module of this type is used. The second solution adds the computational cost of reading and simulating the trace several times.
To overcome this contradiction, a new type of component was designed and implemented to produce compound objects. The compounding component encapsulates the elementary objects produced by the simulator into a single object that represents the current observation window of the parallel program execution. This compound object is produced by the compounding component and consumed by the following components in the data-flow graph (see figure 2). Each elementary object input by the compounding component is linked to the compound object. At regular intervals, if the compound object has changed, it is put in the data-flow graph so that the following modules of the graph can take the changes into account. All accesses by the other components to the elementary objects are made through the compound object, according to a well-defined protocol. This protocol defines how to obtain global information about the current observation window (number of nodes, maximum number of threads on each node, etc.), retrieve the elementary objects in a temporal sub-window, obtain more information about an elementary object, and request the inspection of an elementary object. In this way, all the complexity of storing and accessing the large quantity of data generated by the trace is isolated in the compound object.
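A rough sketch of that access protocol is given below. The method names, the elementary-object representation, and the timestamps are assumptions made for the example; they are not the authors' actual interface.

```python
class CompoundObject:
    """Represents the current observation window of the program execution."""

    def __init__(self):
        self.elementary = []                     # (timestamp, node, object) links

    def link(self, timestamp, node, obj):
        """Link an elementary object produced by the simulator to the window."""
        self.elementary.append((timestamp, node, obj))

    def global_info(self):
        """Global information about the window, e.g. the number of nodes."""
        return {"nodes": len({node for _, node, _ in self.elementary})}

    def objects_in(self, t_start, t_end):
        """Elementary objects falling inside a temporal sub-window."""
        return [obj for t, _, obj in self.elementary if t_start <= t <= t_end]

    def inspect(self, obj):
        """Stand-in for asking for the detailed inspection of an elementary object."""
        return f"details of {obj!r}"

window = CompoundObject()
window.link(0.0, 0, "thread state A")
window.link(1.2, 1, "thread state B")
print(window.global_info(), window.objects_in(0.0, 1.0))
```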
Besides simplifying the construction of data-consuming modules, centralizing access to the data has other advantages for the construction of filters (see section 3.3) and for memory management by the controller component.
URL: https://www.sciencedirect.com/science/article/pii/S0927545298800385
Designing Diagrams
Susan Fowler, ... FAST CONSULTING, in Web Application Design Handbook, 2004
Provide Filtering Options
For diagrams with lots of data (for example, a data-flow diagram for all of the company's actuarial tables), users may need to isolate one or two datasets from the dozens or hundreds that could appear on the window.
To filter a diagram, you may be able to adapt whatever filtering interface you already have in your system. Chapter 6, Data Retrieval: Filter and Browsing, covers filtering.
When there are many elements or when the diagram is used to analyze problems, it may make sense to provide a query-on-query option. In other words, rather than asking users to run new queries each time they have new questions, let them refine the search starting from the current set of data.
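A minimal sketch of the query-on-query idea follows, assuming a list of already retrieved records: each refinement is applied to the current result set rather than re-running a query against the full dataset (the field names are invented).

```python
rows = [
    {"table": "auto", "region": "east", "year": 2003},
    {"table": "auto", "region": "west", "year": 2004},
    {"table": "life", "region": "east", "year": 2004},
]

def refine(current, **criteria):
    """Filter the current result set by the given field=value pairs."""
    return [r for r in current if all(r.get(k) == v for k, v in criteria.items())]

step1 = refine(rows, table="auto")     # first question
step2 = refine(step1, year=2004)       # follow-up question against step1's results
print(step2)                           # [{'table': 'auto', 'region': 'west', 'year': 2004}]
```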
URL: https://www.sciencedirect.com/science/article/pii/B9781558607521500128
ACTIVITIES, FUNCTIONS, AND PROCESSES
David C. Hay, in Data Model Patterns, 2006
DEFINITIONS
In his original framework, John Zachman labeled Column Two Function. In both the original data flow diagram notations and later in business process re-engineering, what a company does was called a process. In common conversation, we talk of activities. Indeed, in common conversation all three terms are bandied about as though they were synonymous. For our purposes, however, it is important to make distinctions among these terms.
- An activity: the most general super-type, encompassing all of the following terms.
- A function is a type of activity carried out to accomplish an objective of the enterprise. It is described solely in terms of what it is intended to accomplish, without regard to the technology used to carry it out or who is to perform it. It is also described without reference to time. Functions represent a conceptual version of the Business Owner's View in Row Two. They begin from a global perspective (What is the mission of the enterprise?) and may be broken down to reveal a considerable amount of detail.
- A process is a type of activity performed by the enterprise to produce a specific output or to achieve a goal. It may or may not be described in terms of the mechanisms used or the parties performing it. A set of processes is usually described in sequence.

A business process describes an activity as carried out by business people, including the mechanisms involved. This is in the domain of Row Two, the Business Owner's View. Alternatively, the Architect in Row Three sees a system process, which is concerned with the data transformations involved in carrying out a business process. In either case, processes can be viewed at a high level or in atomic detail.
Figure 3-2 shows the Row Two (Business Owner's View) version of activity, with function and business process as sub-types. More about this later.
The terms are compared in Table 3-1. For this book, the term activity will be used as the title of the Architecture Framework column, in that it encompasses all of these concepts, although in this model activity will be distinguished from system process, which is described in the Row Three section (see pages 142 through 157).
Table 3-1. Activity types.
| Term | Framework row | With mechanisms and parties? | In sequence? |
| --- | --- | --- | --- |
| Function | 2 | No | No |
| Business Process | 2 | Yes | Yes |
| System Process | 3 | No | Yes |
| Activity | 2, 3 | Yes/no | Yes/no |
This book primarily discusses functions and business processes (see Figure 3-2) when talking about the Business Owner's View and system processes when talking about the Architect's View.
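Table 3-1 translates almost directly into a small type hierarchy. The sketch below only illustrates the distinctions in the table; the attribute names are ours, not the book's.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Activity:                                          # most general super-type
    framework_rows: Tuple[int, ...] = (2, 3)
    with_mechanisms_and_parties: Optional[bool] = None   # "Yes/no" in Table 3-1
    in_sequence: Optional[bool] = None

@dataclass
class Function(Activity):
    framework_rows: Tuple[int, ...] = (2,)
    with_mechanisms_and_parties: Optional[bool] = False
    in_sequence: Optional[bool] = False

@dataclass
class BusinessProcess(Activity):
    framework_rows: Tuple[int, ...] = (2,)
    with_mechanisms_and_parties: Optional[bool] = True
    in_sequence: Optional[bool] = True

@dataclass
class SystemProcess(Activity):
    framework_rows: Tuple[int, ...] = (3,)
    with_mechanisms_and_parties: Optional[bool] = False
    in_sequence: Optional[bool] = True
```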
URL: https://www.sciencedirect.com/science/article/pii/B9780120887989500048
Software engineering
Paul S. Ganney, ... Edwin Claridge, in Clinical Engineering (Second Edition), 2020
Abbreviations
- AI: Artificial Intelligence
- ANN: Artificial Neural Network
- BLOB: Binary Large Object
- CAD: Computer Aided Design
- CBCT: Cone Beam Computed Tomography
- CC: Change Control
- COTS: Commercial Off-The-Shelf Software
- CSV: Comma Separated Values
- DFD: Data Flow Diagram
- EPROM: Electrically Programmable Read-Only Memory
- ERD: Entity-Relationship Diagram
- FEA: Finite Element Analysis
- FIFO: First-In-First-Out
- IaaS: Infrastructure as a Service
- IDE: Integrated Development Environment
- IDL: Interactive Data Language
- IR: Infra-Red
- JVM: Java Virtual Machine
- LAN: Local Area Network
- LCSAJ: Linear Code Sequence And Jump
- LIFO: Last-In-First-Out
- MFC: Microsoft Foundation Classes
- MIMICS: Materialise Interactive Medical Image Control System
- NoSQL: Not Only SQL or No SQL
- OBS: Output-Based Specification
- OOP: Object-Oriented Programming
- OS: Operating System
- OTS: Off-The-Shelf (software)
- OTSS: Off-The-Shelf Software
- PAT: Portable Appliance Test
- PPM: Planned Preventative Maintenance
- RDBMS: Relational Database Management Systems
- RFID: Radio Frequency Identification
- ROI: Region Of Interest
- ROM: Read-Only Memory
- RTM: Requirements Traceability Matrix
- SaaS: Software as a Service
- SASEA: Selecting Appropriate Software Engineering Assets
- SCCS: Source Code Control System
- SCM: Software Configuration Management
- SOUP: Software Of Unknown Provenance
- SQA: Software Quality Assurance
- SQC: Software Quality Control
- SQL: Structured Query Language
- SRS: Software Requirements Specification
- SVN: Apache Subversion
- UML: Unified Modelling Language
- VB: Visual Basic
- VCS: Version Control System
- WIMP: Windows, Icons, Mouse, Pointer
- Windows CE: Windows Compact Edition
URL: https://www.sciencedirect.com/science/article/pii/B9780081026946000097
Understanding Development Methodologies
Charles D. Tupper, in Data Architecture, 2011
Structured Analysis
The structured concepts reached their peak in the structured analysis approach, which still exists in many different forms. In the structured analysis approach, the current application system was captured in the "data flow diagram." The technique itself advocated the separation of logical design from physical implementation. To achieve this, the existing data store was viewed as the old physical model, and a new logical model was derived from it. If there were no previous system in place, then the manual process would be analyzed as if it were one and documented as such. This new logical design focused on what was done rather than how it was done.
The client's desired changes could then be applied to the logical model. The changed model would become an even "newer" new model and would be translated into a new physical model for implementation. Because of the impact this approach had on the relationship between the business problem and the program solution, the concept of modularization was refined. This refinement gave uniformity to program module structure, to interface and communication restrictions between modules, and to quality measurements. Later, some of the significant findings of this period helped form the conceptual roots of object-oriented design, which we will cover in more detail elsewhere.
URL: https://www.sciencedirect.com/science/article/pii/B9780123851260000048
Managing Change with Incremental SOA Analysis
Douglas K. Barry, David Dick, in Web Services, Service-Oriented Architectures, and Cloud Computing (Second Edition), 2013
Decomposition Matrix
The decomposition matrix tool generates either business process or data flow diagrams. It does this using an algebra for design decomposition that Mike Adler published in the 1980s.
A feature of the decomposition matrix is that it does not look at all like a business process or data flow diagram. Business process diagrams, for example, are a great way to design a workflow. The problem for most of us, however, is that if we are familiar with a given workflow, it is often difficult to see how it could be significantly different. We all tend to repeat or recreate what we know. The decomposition matrix, by contrast, requires us to think only about inputs, outputs, and how they relate to each other. The diagrams are generated for you based on the matrix of inputs, outputs, and relationships.
A significant issue when making any systems change, particularly in large organizations, is getting agreement on what the changed system should do. This compounds the difficulty of seeing what the changed system should look like. Not only might individuals have a difficult time imagining how their workflow could be different, but there might also be entirely different views of the workflow in different parts of an organization. A tool like the decomposition matrix can be a way to address different views within an organization by getting people to think only about inputs, outputs, and how they relate to each other.
I have the decomposition matrix tool implemented on one of my websites. It is free to use. It can be used in a group setting if you have a computer with an Internet connection hooked up to a projector.
Figure 10.1 shows a decomposition matrix of inputs, outputs, and relationships. It allows you to discuss detailed issues one at a time instead of trying to juggle multiple issues all together in a design. You only need to make a series of binary decisions. Such a decision is whether a given input is related to a given output. Sometimes that can generate a great deal of discussion and bring out design issues not previously mentioned. The decomposition matrix assembles these simple decisions and generates a decomposition that might help you with your design process.
The tool on my website generates either business process or data flow diagrams. Most people are familiar with business process diagrams. The data flow diagrams are a way to get at the decomposition of services in an SOA. The decomposition matrix has a specific definition of atomicity. Atomicity generally means that a business process cannot be decomposed further (see page 17 for a general definition of atomic services). The specific definition of atomicity used by the decomposition matrix is that a business process task or a data flow process is atomic if every input relates to every output in the decomposition matrix. In other words, there are check marks in every box of the matrix. Atomic tasks and processes are an important aspect of the incremental SOA analysis.
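The atomicity rule is simple enough to state in code. In the sketch below the matrix is held as a set of checked (input, output) pairs; the input and output names are taken from the travel example, but the helper itself is ours.

```python
def is_atomic(inputs, outputs, checks):
    """Atomic iff every (input, output) box of the decomposition matrix is checked."""
    return all((i, o) in checks for i in inputs for o in outputs)

inputs = ["travel dates and locations"]
outputs = ["flight availability request"]
checks = {("travel dates and locations", "flight availability request")}
print(is_atomic(inputs, outputs, checks))   # True: a check mark in every box
```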
It is possible that the decomposition matrix might give you some new ideas or help you get past a sticking point in your design process. In that way, it acts much like having another designer in the room. The decomposition matrix is not a design methodology. It is meant to be a design aid. You can use it with whatever methodology you prefer since it is just another "designer" in the room.
The next section provides an example of how this tool works.
Business Process Diagram
To illustrate how the decomposition matrix works, I will use an example from a series of blog posts that start at http://www.designdecomposition.com/blog/?p=6. This example uses a set of inputs and outputs for a travel coordinator. Using those inputs and outputs, the decomposition matrix tool will generate a business process diagram.
This example is from the first edition of this book. The idea that a VPA—like the one in the story about C. R.'s business trip—could make all travel arrangements was not considered when I wrote the first edition. Nevertheless, making travel arrangements is an almost universally understood process so I decided it is still a useful example for the decomposition matrix.
The inputs and outputs in Figure 10.1 should be familiar to most people who have taken a business trip. They involve finding airline flights, a rental car, and hotel rooms for a set of travel dates along with making the reservations and obtaining driving instructions. Figure 10.1 shows this decomposition matrix.
You need to consider the relationship between only one row and one column at a time when using the decomposition matrix. These are the binary decisions mentioned earlier. For example, you could describe the relationship of the first row and first column as "the input of travel dates and locations that occurs before or concurrently with the output for a flight availability request."
The portion in italics is an example of the type of phrasing you should use. You may read across the row or down a column using the italicized phrasing.
Considering just one row and one column at a time makes it easier to work with larger designs. There is no need to try to keep the entire design in your head. You just need to think about each relationship one at a time.
Arranging flights involves using the travel dates and locations to request a list of available flights. Sometimes you may need to make multiple requests with different flight times or you may make requests to multiple airlines. Figure 10.1 shows this with a check mark in the second row, flight availability response, and first column, flight availability request. The third row, flight reservation response, is not checked in the first column, because you cannot have a response before a request.
The fourth column shows the inputs that occur before or concurrently with the input to a car rental reservation request. Before making a reservation request, you need to know that cars are available for your travel dates and locations. You also need to know if flights and hotel rooms are available. You do not, however, need to reserve a room before a car. On the other hand, car rental agencies often ask for a flight number at the time of rental. So there is a check mark in the third row, flight reservation response, for the fourth column. This occurs before or concurrently with the output for a car rental reservation request.
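The few cells discussed above can be written down as data, which is essentially all the matrix is. Only the check marks mentioned in the text are shown here; the full matrix of Figure 10.1 is not reproduced.

```python
# Checked (input row, output column) pairs from the travel example.
checks = {
    ("travel dates and locations", "flight availability request"),      # row 1, col 1
    ("flight availability response", "flight availability request"),    # row 2, col 1
    ("flight reservation response", "car rental reservation request"),  # row 3, col 4
}
# Deliberately absent: ("flight reservation response", "flight availability request"),
# because a response cannot occur before its request.

def related(row: str, column: str) -> bool:
    """One binary decision: does this input occur before or concurrently with this output?"""
    return (row, column) in checks

print(related("flight availability response", "flight availability request"))  # True
print(related("flight reservation response", "flight availability request"))   # False
```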
The generated business process diagram is shown in Figure 10.2. The diagram uses a subset of the business process modeling notation (BPMN). The tool does not generate labels for the tasks. I have added task labels to this diagram.
There are a couple of ways the generated diagram can give you hints that there are problems with the check marks in your decomposition matrix:
- If you have had trouble coming up with any of the labels, that could be a hint that the inputs and outputs might not have the correct check marks, or perhaps an input or output was overlooked.
- If the diagram is confusing, that is a hint that the check marks might not be correct. An example of something confusing is a request for something coming in after its related response.
You can "play" with the inputs and outputs to see what happens to the generated diagram. This is not a complete design tool. At some point you may want to transcribe a generated diagram into your design tool, much like you would if you used a whiteboard.
Data Flow Diagram
The next example generates a Web services API or services interface layer for legacy systems. Figure 10.3 shows the decomposition matrix. The inputs are from some type of legacy system. Some of the possible outputs are also shown in the decomposition matrix. It is obviously simpler than the real world, but it serves as an illustration of how the tool can be used.
You can phrase a relationship in Figure 10.3 as "the input of invoice is used directly or indirectly for the output of payments." The italicized portion of the phrase is important. Note that this is different from how relationships are described for business process diagrams. In this case, we are dealing with data flow and not the sequencing that business process tasks require.
Figure 10.4 shows the decomposition of services based on the matrix. The processes have been labeled. Just like with the business process diagrams, the tool leaves labeling up to the user. Again, if it is difficult to label a process or if the diagram is confusing, that is a hint that the inputs and outputs may not be complete or that some check marks are missing.
The top-level processes in Figure 10.4 represent the Web services API or service interface layer. Some of the top-level processes have multiple outputs. This indicates that the input parameters will need to specify the XML tags (in this case) to include in the output. Such input parameters are not shown in data flow diagrams, but they will be needed when you design the services. Any data flow diagram shows only the flow of data and not the control input parameters.
The services below the top level are reusable components that have been factored out. Depending on your implementation, you could implement them as services or as library code components.
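The point about control parameters can be illustrated with a toy service. The function name, fields, and tags below are invented; the only idea taken from the text is that a caller-supplied parameter, invisible in the data flow diagram, selects which XML tags the service returns.

```python
def invoice_service(invoice: dict, include_tags: list) -> str:
    """Return only the requested XML elements of an invoice record."""
    parts = [f"<{tag}>{invoice[tag]}</{tag}>" for tag in include_tags if tag in invoice]
    return "<invoice>" + "".join(parts) + "</invoice>"

record = {"number": "A-1001", "amount": "250.00", "payments": "0.00"}
print(invoice_service(record, include_tags=["number", "payments"]))
# <invoice><number>A-1001</number><payments>0.00</payments></invoice>
```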
Just like with the business process decomposition, this tool allows you to "play" with inputs and outputs to see the effects. At some point, you will want to transcribe the decomposition into your design tool.
URL: https://www.sciencedirect.com/science/article/pii/B9780123983572000105
Developer Stories for Data Integration
Ralph Hughes, in Agile Data Warehousing Project Management, 2013
Forming backlogs with developer stories
A baseline DWBI reference architecture empowers a data architect and the rest of the lead-off team to easily translate user stories into a starter set of developer stories. The initial developer workshop results in a high-level data flow diagram for the project. An example of such a diagram is provided by Figure 6.4, which shows results for the sample user story we opened the chapter with. These data flow diagrams are planning artifacts for the team, meant solely to enable the participants to identify developer stories. Therefore they are drawn schematically to communicate the probable organization of the team's labor rather than in detail as they would be drawn if they were truly design diagrams. When the data architect drafts such a diagram, he is typically working at an aggregate, logical level of abstraction, where each box may represent one or several closely related data objects. He will decide in later design work whether each data object on this diagram translates into one or more physical tables. Fortunately, for an agile approach that prefers to operate with as much just-in-time requirements and design as possible, these schematic data flow diagrams often provide enough guidance that detailed design does not need to take place until the iteration in which each of the objects shown in this diagram will be constructed.
The sample in Figure 6.4 shows three separate threads, one each for customer, product, and revenue transactions. Each thread shows the progression of data across a set of swim lanes that correspond to the layers in the company's DWBI reference architecture. In the sample work presented in the last chapter, the enterprise architect informed the project architect that their company already has a repository of cleansed, integrated customer records, so, in Figure 6.4, the diagram's customer data thread starts in the integration layer with an existing data object. For product and revenue, the team will have to acquire its own data, so those threads start with "new" data objects drawn in the staging area. With these threads, the data architect reveals his plan to first integrate product and revenue records from the two source systems and then dimensionalize them. The semantic layer in the dashboard tool will link the resulting dimensions and facts so that they appear as a single data object to the user's front-end applications.
The systems analyst exerted some influence on how the diagram for this example was drawn. Having learned from the enterprise and project architects that many of the company's customers actually pay for their services in quarterly installments, he realized that revenue data actually take two forms: what was billed to the customer every 3 months and the monthly breakout of those quarterly payments. The data architect had suggested only a data object called revenue fact for the presentation layer. Realizing that the team would need to apply some complex business rules for allocating quarterly billings to monthly revenue bookings, the systems analyst asked that billed and booked revenue be represented as two separate data objects so that the team would treat them as separate developer stories. When the revenue data structures are later fully realized, each of these developer stories may pertain only to a distinct set of columns within a single physical table, or each may become a separate physical table. Given that the developer workshop's data flow diagram serves as a work-planning rather than a design artifact, the data architect was happy to draw two data objects, understanding that he would later be free to realize the objects differently in the application's design if need be.
Once this schematic data flow diagram was complete, the team could see that viewing the single user story of our example through the lens of the DWBI reference architecture resulted in 11 developer stories. In practice, a multiplier of 10 to 1 or even higher is not uncommon for data integration projects. There is a further multiplier for translating developer stories into development tasks, as demonstrated earlier by Table 2.2, which listed the typical steps to building a common dimension object for a data warehouse. The combined effect of these two multipliers reveals why many teams that try to utilize generic Scrum can find data integration projects overwhelming. Working with only user stories, they may plan to deliver three or four stories per iteration, only to see the work explode by a factor of 200 or more tasks per story, far more than can be handled easily on a task board by a self-organized team of a half-dozen developers. Introducing the developer story and planning development at that level brings the objectives of each iteration down to a reasonable number with no hidden multiplier involved, making the work far more manageable.
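The multipliers can be spelled out with round numbers. The 10-to-1 ratio and the three-to-four user stories per iteration come from the text; the twenty-or-so tasks per developer story is an assumed order of magnitude standing in for the steps listed in Table 2.2.

```python
developer_stories_per_user_story = 10    # "a multiplier of 10 to 1 or even higher"
tasks_per_developer_story = 20           # assumed order of magnitude (Table 2.2 steps)
user_stories_per_iteration = 4           # "three or four stories per iteration"

tasks_per_user_story = developer_stories_per_user_story * tasks_per_developer_story
print(tasks_per_user_story)              # 200 tasks hidden behind one user story
print(user_stories_per_iteration * tasks_per_user_story)   # roughly 800 tasks in one iteration
```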
The discussion so far has focused on identifying developer stories during the initial stages of a project in order to scope the whole project and to provide an initial project estimate, as required by the inception phase of the release cycle. The team will repeat developer story workshops as necessary through the remaining phases of the release cycle, often as part of the backlog grooming process that precedes each story conference. Typically the product owner will meet with the project architect a week or two before a story conference to groom the backlog of user stories. These two will then support the data architect and systems analyst as they convert the new and modified user stories at the top of the list into developer stories so that the team can conduct the next story conference with well-defined developer stories and form a reasonable commitment for the iteration. For those stories that the team will most likely work on during the next sprint, the data architect and systems analyst also need to provide data and process designs at an 80/20 level of completion so that the programmers can move to coding without delay. These specifications do not arise instantaneously; we will discuss in Chapter 8 how to create the time the data architect and systems analyst will need for this work.
URL: https://www.sciencedirect.com/science/article/pii/B9780123964632000065
Discrete-Time Control System Implementation Techniques
Antonio Barreiro, in Control and Dynamic Systems, 1995
Recursive Hankel factorization algorithm
This appendix contains the implementation, in the MATLAB package, of the recursive Hankel factorization algorithm of Section IV. The program is organized into seven scripts (files consisting of a sequence of instructions), as shown in the data flow diagram of Fig. 1. The scripts lateral and central do the calculations. The script update stores the newly computed entries. The script iterate increases the indices (N, i, j) and enlarges the matrices.
The objective of the code is to present as simple a transcription of the algorithm as possible, so that the correctness of the example results can be checked.
In preparing the program, clarity and brevity of the MATLAB code were preferred over efficient computation and memory use. The given code can easily be adapted to improve these aspects. Exploiting the pattern of zero entries would allow the data to be stored in more compact variables and would reduce the number of multiplications (see Remark 4.1).
However, all of these improvements would come at the expense of a more careful use of subindices, which would make it difficult to see how the algorithm works. Furthermore, memory use and computational cost are not critical for small problems such as the examples considered in the chapter. Based on these guidelines, the following code has been prepared:
URL: https://www.sciencedirect.com/science/article/pii/S0090526706800496
Source: https://www.sciencedirect.com/topics/computer-science/data-flow-diagram