Saturday, 21 November 2015

Set Associative cache memory with example

Set Associative Mapping
A compromise that provides the strengths of both the direct and associative approaches.
• Cache is divided into a number of sets of lines
• Each set contains a fixed number of lines
• A given block maps to any line in a given set determined by that block’s address
— e.g. Block B can be in any line of set i
• e.g. 2 lines per set
— 2-way associative mapping
— A given block can be in one of 2 lines in only one set
• m = v * k
— Where m = number of lines in cache, v = number of sets and k = lines/set
— Lines in cache = sets * lines per set
• i = j modulo v
— Where i = set number and j = main memory block number
— Set number = block number % number of sets
• This is referred to as a “k-way” set associative mapping
• Block Bj can be mapped only into lines of set i.
K-Way Set Associative Cache Organization

Set Associative Mapping Example
• Assume a 13-bit set number
• Set number = block number modulo 2^13 (0010 0000 0000 0000b = 2000h)
• Block numbers 000000, 002000, 004000, … therefore map to the same set
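The mapping above (i = j mod v) can be checked with a short script. The 13-bit set field and the block numbers are taken from the example; the helper name is ours:

```python
# i = j mod v: block j maps to set i when the cache has v sets.
V = 2 ** 13  # number of sets implied by a 13-bit set field

def set_number(block_number: int) -> int:
    """Set that a main-memory block maps to (i = j mod v)."""
    return block_number % V

# Block numbers that differ by a multiple of 2^13 share a set:
for block in (0x000000, 0x002000, 0x004000):
    print(f"block {block:06X} -> set {set_number(block):04X}")
```

All three blocks land in set 0000, as the example states.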
Set Associative Mapping Address Structure
• Cache control logic sees address as three fields: tag, set and word
• Use set field to determine cache set to look in
• Compare tag field to see if we have a hit
• e.g. (hex values; the address is shown split into its tag and set/word fields):
— Address 1FF 7FFC: Tag = 1FF, Data = 12345678, Set number = 1FFF
— Address 001 7FFC: Tag = 001, Data = 11223344, Set number = 1FFF
• Tags are much smaller than in fully associative mapping, and the comparators needed for simultaneous lookup are much less expensive
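The tag/set/word split from the example can be reproduced in code. This sketch assumes the 24-bit address layout implied above (9-bit tag, 13-bit set, 2-bit word); the function name is ours:

```python
TAG_BITS, SET_BITS, WORD_BITS = 9, 13, 2

def split_address(addr: int):
    """Split a 24-bit address into (tag, set, word) fields."""
    word = addr & ((1 << WORD_BITS) - 1)
    set_no = (addr >> WORD_BITS) & ((1 << SET_BITS) - 1)
    tag = addr >> (WORD_BITS + SET_BITS)
    return tag, set_no, word

# The two addresses from the example share set 1FFF but differ in tag:
for addr in (0xFFFFFC, 0x00FFFC):  # 1FF|7FFC and 001|7FFC concatenated
    tag, set_no, word = split_address(addr)
    print(f"addr {addr:06X}: tag={tag:03X} set={set_no:04X} word={word}")
```

Both addresses index the same set, so only the two stored tags (1FF and 001) must be compared to detect a hit.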


For more examples, refer to: http://aturing.umcs.maine.edu/~meadow/courses/cos335/COA04.pdf

Elements of Bus Design


Bus Types
Bus lines can be separated into two generic types: dedicated and multiplexed.

A dedicated bus line is permanently assigned either to one function or to a physical subset of computer components.

The use of the same lines for multiple purposes is known as multiplexing.

Bus Width
The width of the data bus has an impact on system performance.

The wider the data bus, the greater the number of bits that can be transferred at one time.

The width of the address bus has an impact on the system capacity.

The wider the address bus, the greater the range of locations that can be referenced.
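Both width/capacity relationships above are simple powers of two; a quick illustration (the function names are ours, and the figures are only examples):

```python
def addressable_locations(address_bits: int) -> int:
    """An n-bit address bus can reference 2**n distinct locations."""
    return 2 ** address_bits

def bytes_per_transfer(data_bits: int) -> int:
    """A w-bit data bus moves w/8 bytes in a single transfer."""
    return data_bits // 8

print(addressable_locations(16))  # a 16-bit address bus: 65536 locations
print(bytes_per_transfer(32))     # a 32-bit data bus: 4 bytes per transfer
```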

Method of Arbitration
Because more than one module may need the bus at the same time, some method of arbitration is required to decide which module is granted control of the bus.

In a centralized scheme, a single hardware device, referred to as a bus controller or arbiter, is responsible for allocating time on the bus. In a distributed scheme, there is no central controller. Rather, each module contains access control logic and the modules act together to share the bus.

Timing
Refers to the way in which events are coordinated on the bus.

With synchronous timing, the occurrence of events on the bus is determined by a clock.

With asynchronous timing, the occurrence of one event on a bus follows and depends on the occurrence of a previous event.

Data Transfer Type
Read
Write
Read-modify-write
Read-after-write
Block

Monday, 16 November 2015

COCOMO II Stages? Category of Projects for Which COCOMO Is Applicable?


COnstructive COst MOdel II (COCOMO® II) is a model that allows one to estimate the cost, effort, and schedule when planning a new software development activity. COCOMO® II is the latest major extension to the original COCOMO® (COCOMO® 81) model published in 1981. It consists of three submodels, each one offering increased fidelity the further along one is in the project planning and design process. Listed in increasing fidelity, these submodels are called the Applications Composition, Early Design, and Post-architecture models.

COCOMO II has three different models :
  • The Application Composition Model: Suitable for projects built with modern GUI-builder tools. Based on Object Points.
  • The Early Design Model: This model is used to make rough estimates of a project's cost and duration before its entire architecture has been determined. It uses a small set of new Cost Drivers and new estimating equations. Based on Unadjusted Function Points or KSLOC.
    For the Early Design and Post-Architecture Models:

    PM = a × Size^E × ∏ EMi + (ASLOC × (AT / 100)) / ATPROD
    E = 1.01 + 0.01 × Σ SFj
    Size = KSLOC × (1 + BRAK / 100), with adapted code scaled by AAM

    Where a = 2.5, SFj = scale factor, EMi = effort multiplier
    BRAK = percentage of code discarded due to requirement volatility
    ASLOC = size of adapted components
    AT = percentage of components automatically translated
    ATPROD = Automatic Translation Productivity
    AAM = Adaptation Adjustment Multiplier
    COCOMO II adjusts for the effects of reengineering in its effort estimate. When a project includes automatic translation, the following quantities must be estimated:
    • Automatic translation productivity (ATPROD), estimated from previous development efforts
    • The size, in thousands of Source Lines of Code, of untranslated code (KSLOC) and of code to be translated (KASLOC) under this project.
    • The percentage of components being developed from reengineered software (ADAPT)
    • The percentage of components that are being automatically translated (AT).

    The effort equation is adjusted by 15 cost driver attributes in COCOMO 81, but COCOMO II defines seven cost drivers (EM) for the Early Design estimate:
    • Personnel capability
    • Product reliability and complexity
    • Required reuse
    • Platform difficulty
    • Personnel experience
    • Facilities
    • Schedule constraints.
    Some of these effort multipliers are disaggregated into several multipliers in the Post-Architecture COCOMO II model.
    COCOMO II models software projects as exhibiting decreasing returns to scale. Decreasing returns are reflected in the effort equation by an exponent for SLOC greater than unity. This exponent varied among the three COCOMO 81 development modes (organic, semidetached, and embedded). COCOMO II does not explicitly partition projects by development mode. Instead, the power to which the size estimate is raised is determined by five scale factors:
    • Precedentedness (how novel the project is for the organization)
    • Development flexibility
    • Architecture/risk resolution
    • Team cohesion
    • Organization process maturity.
  • The Post-Architecture Model: This is the most detailed COCOMO II model. It is used after the project's overall architecture is developed. It has new cost drivers, new line-counting rules, and new equations.

    Use of reengineered and automatically translated software is accounted for as in the Early Design equation (ASLOC, AT, ATPROD, and AAM). Breakage (BRAK), the percentage of code thrown away due to requirements change, is accounted for in the size estimate. Reused software (RUF) is accounted for in the effort equation by adjusting the size by the adaptation adjustment multiplier (AAM). This multiplier is calculated from estimates of the percent of the design modified (DM), the percent of the code modified (CM), the integration effort modification (IM), software understanding (SU), and assessment and assimilation (AA). Seventeen effort multipliers are defined for the Post-Architecture model, grouped into four categories:
    • Product factors
    • Platform factors
    • Personnel factors
    • Project factors
    These four categories parallel the four categories of COCOMO 81: product attributes, computer attributes, personnel attributes, and project attributes, respectively. Many of the seventeen factors of COCOMO II are similar to the fifteen factors of COCOMO 81. The new factors introduced in COCOMO II include required reusability, platform experience, language and tool experience, personnel continuity and turnover, and a factor for multi-site development. Computer turnaround time, use of modern programming practices, virtual machine experience, and programming language experience, which were effort multipliers in COCOMO 81, are removed in COCOMO II.
    A single development schedule estimate is defined for all three COCOMO II models:

    TDEV = c × PM^(0.33 + 0.2 × (E − 1.01)) × (SCED% / 100)

    Where c = 3, E is the exponent from the effort equation above, and SCED% = schedule compression/expansion parameter.
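The effort and schedule equations above can be sketched as a small calculator. The constants (a = 2.5, c = 3, base exponent 1.01) follow the text; the automatic-translation term is omitted for brevity, and the sample scale-factor and effort-multiplier values below are invented purely for illustration:

```python
def cocomo2_effort(ksloc, scale_factors, effort_multipliers, a=2.5):
    """Effort in person-months: PM = a * Size^E * product(EM),
    where E = 1.01 + 0.01 * sum(SFj)."""
    exponent = 1.01 + 0.01 * sum(scale_factors)
    product_em = 1.0
    for em in effort_multipliers:
        product_em *= em
    return a * (ksloc ** exponent) * product_em

def cocomo2_schedule(pm, scale_factors, sced_pct=100.0, c=3.0):
    """Development time in months:
    TDEV = c * PM^(0.33 + 0.2*(E - 1.01)) * SCED%/100."""
    exponent = 0.33 + 0.2 * (0.01 * sum(scale_factors))
    return c * (pm ** exponent) * (sced_pct / 100.0)

# Hypothetical 50 KSLOC project with made-up scale-factor ratings
# and all seven Early Design effort multipliers at nominal (1.0):
sf = [3.72, 3.04, 4.24, 3.29, 4.68]
em = [1.0] * 7
pm = cocomo2_effort(50, sf, em)
tdev = cocomo2_schedule(pm, sf)
print(f"effort: {pm:.1f} person-months, schedule: {tdev:.1f} months")
```

Because the size exponent exceeds 1, doubling the size more than doubles the estimated effort, which is the "decreasing returns to scale" behavior described above.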
Category of Projects for Which COCOMO Is Applicable:
COCOMO® II can be used for the following major decision situations:
  • Making investment or other financial decisions involving a software development effort
  • Setting project budgets and schedules as a basis for planning and control
  • Deciding on or negotiating tradeoffs among software cost, schedule, functionality, performance or quality factors
  • Making software cost and schedule risk management decisions
  • Deciding which parts of a software system to develop, reuse, lease, or purchase
  • Making legacy software inventory decisions: what parts to modify, phase out, outsource, etc.
  • Setting mixed investment strategies to improve an organization's software capability, via reuse, tools, process maturity, outsourcing, etc.
  • Deciding how to implement a process improvement strategy, such as that provided in the SEI CMM


State and define McCall's quality factors. How is the quality of a software product determined using the FURPS quality factors?

McCall's Quality Model - 1977

Jim McCall produced this model for the US Air Force, and the intention was to bridge the gap between users and developers. He tried to map the user's view to the developer's priorities.

McCall identified three main perspectives for characterizing the quality attributes of a software product.

These perspectives are:-
  1. Product revision (ability to change).
  2. Product transition (adaptability to new environments).
  3. Product operations (basic operational characteristics).


Product revision
The product revision perspective identifies quality factors that influence the ability to change the software product, these factors are:-
  • Maintainability, the ability to find and fix a defect.
  • Flexibility, the ability to make changes required as dictated by the business.
  • Testability, the ability to validate the software requirements.


Product transition 
The product transition perspective identifies quality factors that influence the ability to adapt the software to new environments:-
  • Portability, the ability to transfer the software from one environment to another.
  • Reusability, the ease of using existing software components in a different context.
  • Interoperability, the extent, or ease, to which software components work together.


Product operations 
The product operations perspective identifies quality factors that influence the extent to which the software fulfils its specification:-
  • Correctness, the functionality matches the specification.
  • Reliability, the extent to which the system performs its intended function without failure.
  • Efficiency, system resource (including CPU, disk, memory, and network) usage.
  • Integrity, protection from unauthorized access.
  • Usability, ease of use.


In total, McCall identified 11 quality factors, broken down by the 3 perspectives listed above.
For each quality factor McCall defined one or more quality criteria (a way of measurement); in this way an overall quality assessment could be made of a given software product by evaluating the criteria for each factor.

For example, the Maintainability quality factor would have criteria of simplicity, conciseness and modularity.
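A McCall-style assessment is often written as a weighted sum of criteria metrics, Fq = Σ ci·mi. A minimal sketch of scoring the Maintainability factor from its three criteria; the weights and metric scores here are invented for illustration:

```python
def factor_score(criteria: dict, weights: dict) -> float:
    """McCall-style quality factor score: weighted sum of criteria
    metrics, each measured on a 0..1 scale."""
    return sum(weights[name] * criteria[name] for name in criteria)

# Maintainability via its criteria (illustrative 0-1 scores and weights):
criteria = {"simplicity": 0.8, "conciseness": 0.6, "modularity": 0.9}
weights = {"simplicity": 0.4, "conciseness": 0.2, "modularity": 0.4}
print(f"maintainability score: {factor_score(criteria, weights):.2f}")
```

Repeating this for all 11 factors yields the overall quality assessment of the product.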

FURPS is an acronym representing a model for classifying software quality attributes (functional and non-functional requirements):
• Functionality : Feature set, Capabilities, Generality, Security
• Usability:  Human factors, Aesthetics, Consistency, Documentation
• Reliability:  Frequency/severity of failure, Recoverability, Predictability, Accuracy, Mean time to failure
• Performance:  Speed, Efficiency, Resource consumption, Throughput, Response time
• Supportability:  Testability, Extensibility, Adaptability, Maintainability, Compatibility, Configurability, Serviceability, Installability, Localizability, Portability

Hence, FURPS is a hierarchical definition model. The first four quality factors (FURP) are aimed more at the user and operator of the software, while the last quality factor (S) is targeted more at the developers, testers, and maintainers. FURPS gives an alternative decomposition to the standard ISO/IEC 25010. The main aim of FURPS is to provide a decomposition and checklist for quality requirements: a software engineer can go through this list of quality factors and check with the stakeholders to define the corresponding qualities. It therefore defines quality as a basis for requirements. In addition, Grady and Caswell describe various metrics that can be related to the quality factors for evaluating them.

What do you understand by Software Engineering? Explain the evolving role of software.

Software engineering is the study and application of engineering principles to the design, development and maintenance of software.

Definitions of SE:
"the systematic application of scientific and technological knowledge, methods, and experience to the design, implementation, testing, and documentation of software".

"the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software"

Evolving Role of Software:

Today, software takes on a dual role. It is a product and, at the same time, the vehicle for delivering a product. As a product, it delivers the computing potential embodied by computer hardware or, more broadly, a network of computers that are accessible by local hardware. Whether it resides within a cellular phone or operates inside a mainframe computer, software is an information transformer—producing, managing, acquiring, modifying, displaying, or transmitting information that can be as simple as a single bit or as complex as a multimedia presentation. As the vehicle used to deliver the product, software acts as the basis for the control of the computer (operating systems), the communication of information (networks), and the creation and control of other programs (software tools and environments). Software delivers the most important product of our time—information.

Software transforms personal data (e.g., an individual’s financial transactions) so that the data can be more useful in a local context; it manages business information to enhance competitiveness; it provides a gateway to worldwide information networks (e.g., Internet) and provides the means for acquiring information in all of its forms.

The role of computer software has undergone significant change over a time span of little more than 50 years. Dramatic improvements in hardware performance, profound changes in computing architectures, vast increases in memory and storage capacity, and a wide variety of exotic input and output options have all precipitated more sophisticated and complex computer-based systems.

The lone programmer of an earlier era has been replaced by a team of software specialists, each focusing on one part of the technology required to deliver a complex application.

And yet, the same questions asked of the lone programmer are being asked when modern computer-based systems are built:

1) Why does it take so long to get software finished?
2) Why are development costs so high?
3) Why can't we find all the errors before we give the software to customers?
4) Why do we continue to have difficulty in measuring progress as software is being developed?

Agile Modeling and how it is related to XP?

Agile Modeling is a practice-based methodology for effective modeling and documentation 
of software-based systems. Simply put, Agile Modeling is a collection of values, principles, and practices for modeling software that can be applied on a software development project in an effective and light-weight manner. Agile Modeling is a supplement to other Agile Methodologies such as:
  • Extreme Programming
  • Select Perspective
  • SCRUM
The principles and values of Agile Modeling practices help to mitigate the criticisms of Agile Software Development. The principle Maximize Stakeholder Value inspires the developer to collaborate with the customer in providing an adequate level of documentation.
The principle Model With Others leads to a design which is the best fit for the customer's needs.

Limitations

There is significant dependence on face-to-face communication and customer collaboration. Agile Modeling is difficult to apply where there are large teams, team members are not co-located, and people skills are lacking. However, Agile Modeling can be scaled with agile architecture techniques.

Core Principles:

  • Model With A Purpose. Many developers worry about whether their artifacts -- such as models, source code, or documents -- are detailed enough, or if they are too detailed, or similarly if they are sufficiently accurate. What they're not doing is stepping back and asking why they're creating the artifact in the first place and who they are creating it for. With respect to modeling, perhaps you need to understand an aspect of your software better, perhaps you need to communicate your approach to senior management to justify your project, or perhaps you need to create documentation that describes your system to the people who will be operating and/or maintaining/evolving it over time. If you cannot identify why and for whom you are creating a model, then why are you bothering to work on it at all? Your first step is to identify a valid purpose for creating a model and the audience for that model, then based on that purpose and audience develop it to the point where it is both sufficiently accurate and sufficiently detailed. Once a model has fulfilled its goals you're finished with it for now and should move on to something else, such as writing some code to show that the model works. This principle also applies to a change to an existing model: if you are making a change, perhaps applying a known pattern, then you should have a valid reason to make that change (perhaps to support a new requirement or to refactor your work to something cleaner). An important implication of this principle is that you need to know your audience, even when that audience is yourself. For example, if you are creating a model for maintenance developers, what do they really need? Do they need a 500 page comprehensive document or would a 10 page overview of how everything works be sufficient? Don't know? Go talk to them and find out.
  • Maximize Stakeholder ROI. Your project stakeholders are investing resources -- time, money, facilities, and so on -- to have software developed that meets their needs. Stakeholders deserve to invest their resources the best way possible and not to have resources frittered away by your team. Furthermore, they deserve to have the final say in how those resources are invested or not invested. If it was your resources, would you want it any other way? Note: In AM v1 this was originally called "Maximize Stakeholder Investment". Over time we realized that this term wasn't right because it sounded like we were saying you needed to maximize the amount of money spent, which wasn't the message.
  • Travel Light. Every artifact that you create, and then decide to keep, will need to be maintained over time. If you decide to keep seven models, then whenever a change occurs (a new/updated requirement, a new approach is taken by your team, a new technology is adopted, ...) you will need to consider the impact of that change on all seven models and then act accordingly. If you decide to keep only three models then you clearly have less work to perform to support the same change, making you more agile because you are traveling lighter. Similarly, the more complex/detailed your models are, the more likely it is that any given change will be harder to accomplish (the individual model is "heavier" and is therefore more of a burden to maintain). Every time you decide to keep a model you trade off agility for the convenience of having that information available to your team in an abstract manner (hence potentially enhancing communication within your team as well as with project stakeholders). Never underestimate the seriousness of this trade-off. Someone trekking across the desert will benefit from a map, a hat, good boots, and a canteen of water; they likely won't make it if they burden themselves with hundreds of gallons of water, a pack full of every piece of survival gear imaginable, and a collection of books about the desert. Similarly, a development team that decides to develop and maintain a detailed requirements document, a detailed collection of analysis models, a detailed collection of architectural models, and a detailed collection of design models will quickly discover they are spending the majority of their time updating documents instead of writing source code.
  • Multiple Models. You potentially need to use multiple models to develop software because each model describes a single aspect of your software. “What models are potentially required to build modern-day business applications?” Considering the complexity of modern-day software, you need to have a wide range of techniques in your intellectual modeling toolkit to be effective (see Modeling Artifacts for AM for a start at a list and Agile Models Distilled for detailed descriptions). An important point is that you don't need to develop all of these models for any given system; depending on the exact nature of the software you are developing, you will require at least a subset of the models. Different systems, different subsets. Just as no single fix-it job at home requires you to use every tool in your toolbox, over time the variety of jobs you perform will require you to use each tool at some point. Just as you use some tools more than others, you will use some types of models more than others. For more details regarding the wide range of modeling artifacts available to you (far more than those of the UML), see the essay Be Realistic About the UML.
  • Rapid Feedback. The time between an action and the feedback on that action is critical. By working with other people on a model, particularly when you are working with a shared modeling technology (such as a whiteboard, CRC cards, or essential modeling materials such as sticky notes) you are obtaining near-instant feedback on your ideas. Working closely with your customer, to understand the requirements, to analyze those requirements, or to develop a user interface that meets their needs, provides opportunities for rapid feedback.
  • Assume Simplicity. As you develop you should assume that the simplest solution is the best solution. Don't overbuild your software, or in the case of AM don't depict additional features in your models that you don't need today. Have the courage that you don't need to over-model your system today, that you can model based on your existing requirements today and refactor your system in the future when your requirements evolve. Keep your models as simple as possible.
  • Embrace Change. Requirements evolve over time. People's understanding of the requirements change over time. Project stakeholders can change as your project moves forward, new people are added and existing ones can leave. Project stakeholders can change their viewpoints as well, potentially changing the goals and success criteria for your effort. The implication is that your project's environment changes as your efforts progress, and that as a result your approach to development must reflect this reality.
  • Incremental Change. An important concept to understand with respect to modeling is that you don't need to get it right the first time; in fact, it is very unlikely that you could do so even if you tried. Furthermore, you do not need to capture every single detail in your models; you just need to get them good enough at the time. Instead of futilely trying to develop an all-encompassing model at the start, you can instead put a stake in the ground by developing a small model, or perhaps a high-level model, and evolve it over time (or simply discard it when you no longer need it) in an incremental manner.
  • Quality Work. Nobody likes sloppy work. The people doing the work don't like it because it's something they can't be proud of, the people coming along later to refactor the work (for whatever reason) don't like it because it's harder to understand and to update, and the end users won't like the work because it's likely fragile and/or doesn't meet their expectations.
  • Working Software Is Your Primary Goal. The goal of software development is to produce high-quality working software that meets the needs of your project stakeholders in an effective manner. The primary goal is not to produce extraneous documentation, extraneous management artifacts, or even models. Any activity that does not directly contribute to this goal should be questioned and avoided if it cannot be justified in this light.
  • Enabling The Next Effort Is Your Secondary Goal. Your project can still be considered a failure even when your team delivers a working system to your users: part of fulfilling the needs of your project stakeholders is to ensure that your system is robust enough so that it can be extended over time. As Alistair Cockburn likes to say, when you are playing the software development game your secondary goal is to set up to play the next game. Your next effort may be the development of the next major release of your system or it may simply be the operations and support of the current version you are building. To enable it you will not only want to develop quality software but also create just enough documentation and supporting materials so that the people playing the next game can be effective. Factors that you need to consider include whether members of your existing team will be involved with the next effort, the nature of the next effort itself, and the importance of the next effort to your organization. In short, when you are working on your system you need to keep an eye on the future.


Supplementary Principles:

  • Content Is More Important Than Representation. Any given model could have several ways to represent it. For example, a UI specification could be created using Post-It notes on a large sheet of paper (an essential or low-fidelity prototype), as a sketch on paper or a whiteboard, as a "traditional" prototype built using a prototyping tool or programming language, or as a formal document including both a visual representation as well as a textual description of the UI. An interesting implication is that a model does not need to be a document. Even a complex set of diagrams created using a CASE tool may not become part of a document, instead they are used as inputs into other artifacts, very likely source code, but never formalized as official documentation. The point is that you take advantage of the benefits of modeling without incurring the costs of creating and maintaining documentation.
  • Open And Honest Communication. People need to be free, and to perceive that they are free, to offer suggestions. This includes ideas pertaining to one or more models, perhaps someone has a new way to approach a portion of the design or has a new insight regarding a requirement; the delivery of bad news such as being behind schedule; or simply the current status of their work. Open and honest communication enables people to make better decisions because the quality of the information that they are basing them on is more accurate.
For the relation with XP, refer to: http://agilemodeling.com/essays/agileModelingXP.htm