Saturday, 21 November 2015
Set Associative cache memory with example
Set Associative Mapping
A compromise that provides the strengths of both the direct and associative approaches.
• Cache is divided into a number of sets of lines
• Each set contains a fixed number of lines
• A given block maps to any line in a given set determined by that block's address — e.g. block B can be in any line of set i
• e.g. 2 lines per set — 2-way associative mapping — a given block can be in one of 2 lines in only one set
• m = v * k — where m = number of lines in the cache, v = number of sets and k = lines/set (lines in cache = sets * lines per set)
• i = j modulo v — where i = set number and j = main memory block number (set number = block number mod number of sets)
• This is referred to as "k-way" set associative mapping
• Block Bj can therefore be mapped only into the lines of set i.
K-Way Set Associative Cache Organization
Set Associative Mapping Example
• Assume a 13-bit set number
• Set number = main memory block number modulo 2^13 (0010 0000 0000 0000 = 2000h)
• Addresses 000000, 002000, 004000, … therefore map to the same set
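To make the i = j modulo v rule concrete, here is a minimal Python sketch (assuming the 13-bit set field above, i.e. v = 2^13 sets); the three example addresses all land in set 0:

V = 2 ** 13  # number of sets (13-bit set field)

def set_number(block):
    """i = j modulo v: the cache set a main memory block maps to."""
    return block % V

for block in (0x000000, 0x002000, 0x004000):
    print(f"block {block:06X} -> set {set_number(block):04X}")
# All three blocks map to set 0000, so they compete for that set's k lines.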
Set Associative Mapping Address Structure
• Cache control logic sees address as three fields: tag, set and word
• Use the set field to determine which cache set to look in
• Compare the tag field to see if we have a hit
• e.g.:
Address    Tag   Data       Set number
1FF 7FFC   1FF   12345678   1FFF
001 7FFC   001   11223344   1FFF
• Tags are much smaller than in fully associative mapping, and the comparators needed for simultaneous lookup are much less expensive
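As a rough sketch of how that control logic might carve up an address (assuming a 24-bit address with a 13-bit set field, a 2-bit word field and a 9-bit tag — the word and tag widths are inferred from the example rows above, not stated there; "1FF 7FFC" packs to the 24-bit value FFFFFC):

WORD_BITS, SET_BITS = 2, 13  # assumed widths; the remaining 9 bits are the tag

def split(addr):
    """Return the (tag, set, word) fields of a 24-bit address."""
    word = addr & ((1 << WORD_BITS) - 1)
    set_ = (addr >> WORD_BITS) & ((1 << SET_BITS) - 1)
    tag = addr >> (WORD_BITS + SET_BITS)
    return tag, set_, word

for addr in (0xFFFFFC, 0x00FFFC):  # the two table rows above, packed
    tag, set_, word = split(addr)
    print(f"{addr:06X} -> tag {tag:03X}, set {set_:04X}, word {word}")
# Both addresses select set 1FFF; only their tags (1FF vs 001) differ.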
Example:
For more refer: http://aturing.umcs.maine.edu/~meadow/courses/cos335/COA04.pdf
Elements of Bus Design
Bus Types
Bus lines can be separated into two generic types: dedicated and multiplexed.
A dedicated bus line is permanently assigned either to one function or to a physical subset of computer components.
The use of the same lines for multiple purposes is known as multiplexing.
Bus Width
The width of the data bus has an impact on system performance: the wider the data bus, the greater the number of bits that can be transferred at one time.
The width of the address bus has an impact on system capacity: the wider the address bus, the greater the range of locations that can be referenced.
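A quick illustration of the capacity point (the bus widths below are just common examples, not taken from the text): an n-bit address bus can reference 2^n distinct locations.

for n in (16, 20, 32):  # illustrative address-bus widths
    print(f"{n}-bit address bus -> {2 ** n:,} addressable locations")
# 16 -> 65,536; 20 -> 1,048,576; 32 -> 4,294,967,296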
Method of Arbitration
Arbitration is the process for resolving disputes: referring a dispute between parties to a third party, either agreed on by them or provided by law, who makes a judgment. In the context of a bus, arbitration determines which module gains control of the bus next.
In a centralized scheme, a single hardware device, referred to as a bus controller or arbiter, is responsible for allocating time on the bus. In a distributed scheme, there is no central controller; rather, each module contains access control logic and the modules act together to share the bus.
Timing
Refers to the way in which events are coordinated on the bus.
With synchronous timing, the occurrence of events on the bus is determined by a clock.
With asynchronous timing, the occurrence of one event on a bus follows and depends on the occurrence of a previous event.
Data Transfer Type
Read
Write
Read-modify-write
Read-after-write
Block
Monday, 16 November 2015
Cocomo II Stages? Category of Projects For Which Cocomo Is Applicable?
COnstructive COst MOdel II (COCOMO® II) is a model that allows one to estimate the cost, effort, and schedule when planning a new software development activity. COCOMO® II is the latest major extension to the original COCOMO® (COCOMO® 81) model published in 1981. It consists of three submodels, each one offering increased fidelity the further along one is in the project planning and design process. Listed in increasing fidelity, these submodels are called the Applications Composition, Early Design, and Post-architecture models.
COCOMO II has three different models:
- The Application Composition Model: Suitable for projects built with modern GUI-builder tools. Based on Object Points.
- The Early Design Model: Used to make rough estimates of a project's cost and duration before its entire architecture has been determined. It uses a small set of new cost drivers and new estimating equations. Based on Unadjusted Function Points or KSLOC.
For the Early Design and Post-Architecture models, effort (in person-months) is estimated as:
PM = a × Size^E × ∏ EMi + (ASLOC × AT/100) / ATPROD, with E = 1.01 + 0.01 × Σ SFj
Where a = 2.5, SFj = scale factor, EMi = effort multiplier
BRAK = percentage of code discarded due to requirement volatility (used to adjust Size)
ASLOC = size of adapted components
AT = percentage of components adapted
ATPROD = automatic translation productivity
AAM = adaptation adjustment multiplier
COCOMO II adjusts for the effects of reengineering in its effort estimate. When a project includes automatic translation, the following quantities must be estimated:
- Automatic translation productivity (ATPROD), estimated from previous development efforts
- The size, in thousands of Source Lines of Code, of untranslated code (KSLOC) and of code to be translated (KASLOC) under this project.
- The percentage of components being developed from reengineered software (ADAPT)
- The percentage of components that are being automatically translated (AT).
The effort equation is adjusted by 15 cost driver attributes in COCOMO 81, but COCOMO II defines seven cost drivers (EM) for the Early Design estimate:
- Personnel capability
- Product reliability and complexity
- Required reuse
- Platform difficulty
- Personnel experience
- Facilities
- Schedule constraints.
COCOMO II models software projects as exhibiting decreasing returns to scale. Decreasing returns are reflected in the effort equation by an exponent for SLOC greater than unity. This exponent varies among the three COCOMO 81 development modes (organic, semidetached, and embedded). COCOMO II does not explicitly partition projects by development mode; instead, the power to which the size estimate is raised is determined by five scale factors:
- Precedentedness (how novel the project is for the organization)
- Development flexibility
- Architecture/risk resolution
- Team cohesion
- Organization process maturity.
- The Post-Architecture Model: This is the most detailed COCOMO II model. It is used after the project's overall architecture is developed. It has new cost drivers, new line counting rules, and new equations.
Use of reengineered and automatically translated software is accounted for as in the Early Design equation (ASLOC, AT, ATPROD, and AAM). Breakage (BRAK), the percentage of code thrown away due to requirements change, is also accounted for in COCOMO II. Reused software (RUF) is accounted for in the effort equation by adjusting the size by the adaptation adjustment multiplier (AAM). This multiplier is calculated from estimates of the percent of the design modified (DM), the percent of the code modified (CM), integration effort modification (IM), software understanding (SU), and assessment and assimilation (AA); a short sketch of this adaptation fraction follows the list below. Seventeen effort multipliers are defined for the Post-Architecture model, grouped into four categories:
- Product factors
- Platform factors
- Personnel factors
- Project factors
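As promised above, here is a rough sketch of the adaptation fraction behind AAM. It uses the classic 0.4/0.3/0.3 weighting of DM/CM/IM inherited from COCOMO 81; COCOMO II further adjusts the result with SU and AA, which are omitted from this simplified version.

def adaptation_fraction(dm, cm, im):
    """Classic adaptation fraction (percent): 0.4*DM + 0.3*CM + 0.3*IM."""
    return 0.4 * dm + 0.3 * cm + 0.3 * im

# e.g. 10% of the design, 20% of the code and 30% of the integration modified:
print(adaptation_fraction(10, 20, 30))  # 19.0 -> adapted code costs ~19% of new code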
A single development schedule estimate is defined for all three COCOMO II models:
TDEV = c × PM^(0.33 + 0.2 × (E − 1.01)) × SCED% / 100
Where c = 3, E is the scale-factor exponent from the effort equation, and SCED% = schedule compression/expansion parameter.
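Putting the effort and schedule equations together, a minimal Python sketch (the scale factor and effort multiplier values below are made-up illustrative inputs, not calibrated COCOMO values):

def cocomo2_estimate(ksloc, scale_factors, effort_multipliers, sced_pct=100):
    a, c = 2.5, 3.0
    E = 1.01 + 0.01 * sum(scale_factors)   # size exponent from scale factors
    pm = a * ksloc ** E                    # nominal effort in person-months
    for em in effort_multipliers:          # apply the cost drivers
        pm *= em
    tdev = c * pm ** (0.33 + 0.2 * (E - 1.01)) * sced_pct / 100
    return pm, tdev

pm, tdev = cocomo2_estimate(50, [3.72, 3.04, 4.24], [1.10, 0.88])
print(f"effort = {pm:.0f} person-months, schedule = {tdev:.1f} months")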
Category of Projects For Which Cocomo Is Applicable:
COCOMO® II can be used for the following major decision situations:
- Making investment or other financial decisions involving a software development effort
- Setting project budgets and schedules as a basis for planning and control
- Deciding on or negotiating tradeoffs among software cost, schedule, functionality, performance or quality factors
- Making software cost and schedule risk management decisions
- Deciding which parts of a software system to develop, reuse, lease, or purchase
- Making legacy software inventory decisions: what parts to modify, phase out, outsource, etc
- Setting mixed investment strategies to improve organization's software capability, via reuse, tools, process maturity, outsourcing, etc
- Deciding how to implement a process improvement strategy, such as that provided in the SEI CMM
State and define McCall's quality factors. How is the quality of a software product determined using the FURPS quality factors?
McCall's Quality Model - 1977
Jim McCall produced this model for the US Air Force and the intention was to bridge the gap between users and developers. He tried to map the user view with the developer's priority.
McCall identified three main perspectives for characterizing the quality attributes of a software product.
These perspectives are:-
- Product revision (ability to change).
- Product transition (adaptability to new environments).
- Product operations (basic operational characteristics).
Product revision
The product revision perspective identifies quality factors that influence the ability to change the software product, these factors are:-
- Maintainability, the ability to find and fix a defect.
- Flexibility, the ability to make changes required as dictated by the business.
- Testability, the ability to Validate the software requirements.
Product transition
The product transition perspective identifies quality factors that influence the ability to adapt the software to new environments:-
- Portability, the ability to transfer the software from one environment to another.
- Reusability, the ease of using existing software components in a different context.
- Interoperability, the extent, or ease, to which software components work together.
Product operations
The product operations perspective identifies quality factors that influence the extent to which the software fulfils its specification:-
- Correctness, the functionality matches the specification.
- Reliability, the extent to which the system performs its intended functions without failure.
- Efficiency, system resource (including CPU, disk, memory, network) usage.
- Integrity, protection from unauthorized access.
- Usability, ease of use.
In total McCall identified the 11 quality factors broken down by the 3 perspectives, as listed above.
For each quality factor McCall defined one or more quality criteria (a way of measurement); in this way an overall quality assessment could be made of a given software product by evaluating the criteria for each factor.
For example the Maintainability quality factor would have criteria of simplicity, conciseness and modularity.
FURPS is an acronym representing a model for
classifying software quality attributes (functional and non-functional
requirements):
• Functionality : Feature set, Capabilities, Generality,
Security
• Usability: Human factors, Aesthetics, Consistency,
Documentation
• Reliability: Frequency/severity of failure,
Recoverability, Predictability, Accuracy, Mean time to failure
• Performance: Speed, Efficiency, Resource consumption,
Throughput, Response time
• Supportability: Testability, Extensibility,
Adaptability, Maintainability, Compatibility, Configurability, Serviceability,
Installability, Localizability, Portability
Hence, FURPS is a hierarchical definition model. The first four quality factors (FURP) are aimed more at the user and operator of the software, while the last quality factor (S) is targeted more at the developers, testers and maintainers. FURPS gives an alternative decomposition to the standard ISO/IEC 25010. The main aim of FURPS is to provide a decomposition and checklist for quality requirements: a software engineer can go through this list of quality factors and check with the stakeholders to define the corresponding qualities. FURPS therefore defines quality as a basis for requirements. In addition, Grady and Caswell [77] describe various metrics that can be related to the quality factors for evaluating them.
What Do You Understand By Software Engineering? Explain the Evolving Role Of Software?
Software engineering is the study and application of engineering to the design, development and maintenance of software.
Definition of SE:
"the systematic application of scientific and technological knowledge, methods, and experience to the design, implementation, testing, and documentation of software".
"the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software"
Evolving Role of Software:
Today, software
takes on a dual role. It is a product and, at the same time, the vehicle for
delivering a product. As a product, it delivers the computing potential
embodied by computer hardware or, more broadly, a network of computers that are
accessible by local hardware. Whether it resides within a cellular phone or
operates inside a mainframe computer, software is an information
transformer—producing, managing, acquiring, modifying, displaying, or
transmitting information that can be as simple as a single bit or as complex as
a multimedia presentation. As the vehicle used to deliver the product, software
acts as the basis for the control of the computer (operating systems), the
communication of information (networks), and the creation and control of other
programs (software tools and environments). Software delivers the most
important product of our time—information.
Software transforms personal data (e.g., an individual’s financial transactions) so that the data can be more useful in a local context; it manages business information to enhance competitiveness; it provides a gateway to worldwide information networks (e.g., Internet) and provides the means for acquiring information in all of its forms.
The role of computer software has undergone significant change over a time span of little more than 50 years. Dramatic improvements in hardware performance, profound changes in computing architectures, vast increases in memory and storage capacity, and a wide variety of exotic input and output options have all precipitated more sophisticated and complex computer-based systems.
The lone programmer of an earlier era has been replaced by a team of software specialists, each focusing on one part of the technology required to deliver a complex application.
And yet, the same questions asked of the lone programmer are being asked when modern computer-based systems are built:
1) Why does it take so long to get software finished?
2) Why are development costs so high?
3) Why can't we find all the errors before we give the software to customers?
4) Why do we continue to have difficulty in measuring progress as software is being developed?
Agile Modeling and How It Is Related to XP?
Agile Modeling is a practice-based
methodology for effective modeling and documentation
of software-based systems. Simply
put, Agile Modeling is a collection of values, principles, and practices for
modeling software that can be applied on a software development project in an
effective and light-weight manner. Agile Modeling is a supplement to other
Agile Methodologies such as:
- Extreme Programming
- Select Perspective
- SCRUM
The principles and values of Agile
Modeling practices help to mitigate the criticisms of Agile Software
Development. The principle Maximize Stakeholder Value inspires the developer to
collaborate with the customer in providing an adequate level of documentation.
The principle Model With Others
leads to a design which is the best fit for the customer's needs.
Limitations
There is significant dependence on
face-to-face communication and customer collaboration. Agile Modeling is
difficult to apply where there are large teams, team members are not
co-located, and people skills are lacking. However, Agile Modeling can be scaled
with agile architecture techniques.
Core Principles:
- Model
With A Purpose. Many developers worry about whether
their artifacts -- such as models, source code, or documents -- are
detailed enough or if they are too detailed, or similarly if they are sufficiently
accurate. What they're not doing is stepping back and asking why they're
creating the artifact in the first place and who they are creating it for.
With respect to modeling, perhaps you need to understand an aspect of your
software better, perhaps you need to communicate your approach to senior
management to justify your project, or perhaps you need to create
documentation that describes your system to the people who will be
operating and/or maintaining/evolving it over time. If you cannot identify
why and for whom you are creating a model then why are you bothering to
work on it all? Your first step is to identify a valid purpose for
creating a model and the audience for that model, then based on that
purpose and audience develop it to the point where it is both sufficiently
accurate and sufficiently detailed. Once a model has fulfilled its goals
you're finished with it for now and should move on to something else, such
as writing some code to show that the model works. This principle also
applies to a change to an existing model: if you are making a change,
perhaps applying a known pattern, then you should have a valid reason to
make that change (perhaps to support a new requirement or to refactor your
work to something cleaner). An important implication of this principle is
that you need to know your audience, even when that audience is yourself.
For example, if you are creating a model for maintenance developers, what
do they really need? Do they need a 500 page comprehensive document or
would a 10 page overview of how everything works be sufficient? Don't
know? Go talk to them and find out.
- Maximize Stakeholder ROI. Your project stakeholders are investing resources -- time,
money, facilities, and so on -- to have software developed that meets
their needs. Stakeholders deserve to invest their resources the best way
possible and not to have resources frittered away by your team.
Furthermore, they deserve to have the final say in how those resources are
invested or not invested. If it was your resources, would you want it any
other way? Note: In AM v1 this was originally called "Maximize
Stakeholder Investment". Over time we realized that this term wasn't
right because it sounded like we were saying you needed to maximize the
amount of money spent, which wasn't the message.
- Travel Light. Every artifact that you create, and then decide to keep, will
need to be maintained over time. If you decide to keep seven models, then
whenever a change occurs (a new/updated requirement, a new approach is
taken by your team, a new technology is adopted, ...) you will need to
consider the impact of that change on all seven models and then act
accordingly. If you decide to keep only three models then you clearly have
less work to perform to support the same change, making you more agile
because you are traveling lighter. Similarly, the more complex/detailed
your models are, the more likely it is that any given change will be
harder to accomplish (the individual model is "heavier" and is
therefore more of a burden to maintain). Every time you decide to keep a
model you trade-off agility for the convenience of having that information
available to your team in an abstract manner (hence potentially enhancing
communication within your team as well as with project stakeholders).
Never underestimate the seriousness of this trade-off. Someone trekking
across the desert will benefit from a map, a hat, good boots, and a
canteen of water; they likely won't make it if they burden themselves with
hundreds of gallons of water, a pack full of every piece of survival gear imaginable,
and a collection of books about the desert. Similarly, a development team
that decides to develop and maintain a detailed requirements document, a
detailed collection of analysis models, a detailed collection of
architectural models, and a detailed collection of design models will
quickly discover they are spending the majority of their time updating
documents instead of writing source code.
- Multiple
Models. You potentially need to use multiple
models to develop software because each model describes a single aspect of
your software. “What models are potentially required to build modern-day
business applications?” Considering the complexity of modern day software,
you need to have a wide range of techniques in your intellectual modeling
toolkit to be effective (see Modeling Artifacts for AM for a start at a list and Agile Models Distilled for detailed descriptions). An important
point is that you don't need to develop all of these models for any given
system, but that depending on the exact nature of the software you are
developing you will require at least a subset of the models. Different
systems, different subsets. Just like every fixit job at home doesn't
require you to use every tool available to you in your toolbox, over time
the variety of jobs you perform will require you to use each tool at some
point. Just like you use some tools more than others, you will use some
types of models more than others. For more details regarding the wide
range of modeling artifacts available to you (far more than those of the UML), see the essay Be Realistic About the UML.
- Rapid
Feedback. The time between an action and the
feedback on that action is critical. By working with other people on a
model, particularly when you are working with a shared modeling technology
(such as a whiteboard, CRC cards, or essential modeling materials such as
sticky notes) you are obtaining near-instant feedback on your ideas.
Working closely with your customer, to understand the requirements, to
analyze those requirements, or to develop a user interface that meets
their needs, provides opportunities for rapid feedback.
- Assume
Simplicity. As you develop you should assume
that the simplest solution is the best solution. Don't overbuild your
software, or in the case of AM don't depict additional features in your
models that you don't need today. Have the courage that you don't need to
over-model your system today, that you can model based on your existing
requirements today and refactor your system in the future when your
requirements evolve. Keep your models as simple as possible.
- Embrace Change. Requirements evolve over time. People's
understanding of the requirements change over time. Project stakeholders
can change as your project moves forward, new people are added and
existing ones can leave. Project stakeholders can change their viewpoints
as well, potentially changing the goals and success criteria for your
effort. The implication is that your project's environment changes as your
efforts progress, and that as a result your approach to development must
reflect this reality.
You need an agile approach to change management.
- Incremental
Change. An important concept to understand
with respect to modeling is that you don't need to get it right the first
time, in fact, it is very unlikely that you could do so even if you tried.
Furthermore, you do not need to capture every single detail in your
models, you just need to get it good enough at the time. Instead of
futilely trying to develop an all-encompassing model at the start, you
instead can put a stake in the ground by developing a small model, or
perhaps a high-level model, and evolve it over time (or simply discard it
when you no longer need it) in an incremental manner.
- Quality Work. Nobody likes sloppy work. The people doing the work don't like
it because it's something they can't be proud of, the people coming along
later to refactor the work (for whatever reason) don't like it because
it's harder to understand and to update, and the end users won't like the
work because it's likely fragile and/or doesn't meet their expectations.
- Working
Software Is Your Primary Goal.
The goal of software development is to produce high-quality working
software that meets the needs of your project stakeholders in an effective
manner. The primary goal is not to produce extraneous documentation,
extraneous management artifacts, or even models. Any activity that does
not directly contribute to this goal should be questioned and avoided if
it cannot be justified in this light.
- Enabling The Next Effort Is Your Secondary Goal. Your project can still be considered
a failure even when your team delivers a working system to your users –
part of fulfilling the needs of your project stakeholders is to ensure
that your system is robust enough that it can be extended over time. As Alistair Cockburn likes to
say, when you are playing the software development game your secondary
goal is to set up to play the next game. Your
next effort may be the development of the next major release of your
system or it may simply be the operations and support of the current
version you are building. To
enable it you will not only want to develop quality software but also
create just enough documentation and supporting materials so that the
people playing the next game can be effective. Factors that you need to consider
include whether members of your existing team will be involved with the
next effort, the nature of the next effort itself, and the importance of
the next effort to your organization. In
short, when you are working on your system you need to keep an eye on the
future.
Supplementary Principles:
- Content Is More Important Than Representation. Any given model could have several
ways to represent it. For example, a UI specification could be created
using Post-It notes on a large sheet of paper (an essential or
low-fidelity prototype), as a sketch on paper or a whiteboard, as a
"traditional" prototype built using a prototyping tool or
programming language, or as a formal document including both a visual
representation as well as a textual description of the UI. An interesting
implication is that a model does not need to be a document. Even a complex
set of diagrams created using a CASE tool may not become part of a
document, instead they are used as inputs into other artifacts, very
likely source code, but never formalized as official documentation. The
point is that you take advantage of the benefits of modeling without
incurring the costs of creating and maintaining documentation.
- Open
And Honest Communication. People need to be free, and to
perceive that they are free, to offer suggestions. This includes ideas
pertaining to one or more models, perhaps someone has a new way to
approach a portion of the design or has a new insight regarding a
requirement; the delivery of bad news such as being behind schedule; or
simply the current status of their work. Open and honest communication
enables people to make better decisions because the information they are basing those decisions on is more accurate.
For the relation with XP, refer to: http://agilemodeling.com/essays/agileModelingXP.htm
Cost Benefit Analysis
Cost–benefit analysis (CBA), sometimes called benefit–cost analysis (BCA), is a systematic approach to estimating the strengths and weaknesses of alternatives that satisfy transactions, activities or functional requirements for a business. It is a technique that is used to determine options that provide the best approach for the adoption and practice in terms of benefits in labor, time and cost savings etc. The CBA is also defined as a systematic process for calculating and comparing benefits and costs of a project, decision or government policy (hereafter, "project").
- Determine on-going staffing costs - an analysis of the operating costs (actual versus proposed). This is based on the rough design of the planned system and its anticipated impact on the company.
- Estimated savings and expenses by user department areas (e.g., Manufacturing, Marketing, etc.). This describes the on-going costs associated with the system, as well as the anticipated savings.
- Itemized benefits - both tangible and intangible. In the systems world, the biggest benefits are typically intangible in nature. A benefit is typically written beginning with a transitive verb, such as improve, maximize, minimize, etc. Substantiate your claim; do not simply say "Improved cash flow"; instead, say something like "Improved cash flow through tighter control over inventory and faster response from Production."
- Break Even point - the calculated point in time where cost savings match accumulated development expenses. It is normally calculated as: Break Even Point = Investment ÷ Average Annual Savings. For example, where the project Investment was $49,215 and the Average Annual Savings was $22,861, the Break Even Point is 2.15 years (26 months).
- Calculate Return On Investment (ROI) - the ratio of projected cost savings versus the amount invested. It is typically calculated as: ROI = (Average Annual Savings ÷ Investment) × 100. Using the figures from above, the ROI is 46.4%.
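Both formulas are easy to check in a few lines of Python, using the same figures as above:

investment = 49_215
avg_annual_savings = 22_861

break_even_years = investment / avg_annual_savings
roi_pct = avg_annual_savings / investment * 100

print(f"break-even: {break_even_years:.2f} years (~{break_even_years * 12:.0f} months)")
print(f"ROI: {roi_pct:.1f}%")  # 2.15 years (~26 months), 46.4%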
Explain the RAD and Waterfall Models?
Definition of RAD:
The RAD model is the Rapid Application Development model. It is a type of incremental model. In the RAD model the components or functions are developed in parallel as if they were mini projects. The developments are time-boxed, delivered, and then assembled into a working prototype. This can quickly give the customer something to see and use and to provide feedback regarding the delivery and their requirements.
Diagram of RAD-Model:
The phases in the rapid application development (RAD) model are:
Business modeling: The information flow is identified between various business functions.
Data modeling: Information gathered from business modeling is used to define data objects that are needed for the business.
Process modeling: Data objects defined in data modeling are converted to achieve the business information flow needed to achieve some specific business objective. Descriptions are identified and created for CRUD (create, read, update, delete) of data objects.
Application generation: Automated tools are used to convert process models into code and the actual system.
Testing and turnover: Test new components and all the interfaces.
Advantages of the RAD model:
- Reduced development time.
- Increases reusability of components
- Quick initial reviews occur
- Encourages customer feedback
- Integration from the very beginning solves a lot of integration issues.
Disadvantages of RAD model:
- Depends on strong team and individual performances for identifying business requirements.
- Only systems that can be modularized can be built using RAD
- Requires highly skilled developers/designers.
- High dependency on modeling skills
- Not applicable to cheaper projects, as the cost of modeling and automated code generation is very high.
Definition of the Waterfall Model:
The Waterfall Model was the first Process Model to be introduced. It is also referred to as a linear-sequential life cycle model. It is very simple to understand and use. In a waterfall model, each phase must be completed fully before the next phase can begin. This type of model is basically used for projects which are small and have no uncertain requirements. At the end of each phase, a review takes place to determine if the project is on the right path and whether or not to continue or discard the project. In this model testing starts only after the development is complete. In the waterfall model phases do not overlap.
Diagram of Waterfall-model:
Advantages of waterfall model:
- This model is simple and easy to understand and use.
- It is easy to manage due to the rigidity of the model – each phase has specific deliverables and a review process.
- In this model phases are processed and completed one at a time. Phases do not overlap.
- Waterfall model works well for smaller projects where requirements are very well understood.
Disadvantages of waterfall model:
- Once an application is in the testing stage, it is very difficult to go back and change something that was not well-thought out in the concept stage.
- No working software is produced until late during the life cycle.
- High amounts of risk and uncertainty.
- Not a good model for complex and object-oriented projects.
- Poor model for long and ongoing projects.
- Not suitable for the projects where requirements are at a moderate to high risk of changing.
What Is Fact Finding and What Are Its Methods?
Definition:
Fact finding is the process of collecting data and information using techniques such as sampling of existing documents, research, observation, questionnaires, interviews, prototyping and joint requirements planning. The system analyst uses suitable fact-finding techniques to develop and implement the current existing system. Collecting the required facts is very important when applying the tools of the System Development Life Cycle, because the tools cannot be used efficiently and effectively without properly extracted facts. Fact-finding techniques are used in the early stages of the System Development Life Cycle, including the system analysis phase, design and post-implementation review. Facts included in any information system can be tested based on three aspects: data - facts used to create useful information; process - functions to perform the objectives; and interface - designs to interact with users.
There are seven common fact-finding techniques :
Sampling of existing documentation, forms and databases
Research and Site visits
Observation of the work environment
Questionnaires
Interviews
Prototyping
Joint requirements planning
Interview
This method is used to collect the information from groups or individuals. Analyst selects the people who are related with the system for the interview. In this method the analyst sits face to face with the people and records their responses. The interviewer must plan in advance the type of questions he/ she is going to ask and should be ready to answer any type of question. He should also choose a suitable place and time which will be comfortable for the respondent.
The information collected is quite accurate and reliable, as the interviewer can clear up and cross-check doubts on the spot. This method also helps bridge areas of misunderstanding and helps in discussing future problems. Structured and unstructured are the two sub-categories of interview. A structured interview is a more formal interview where fixed questions are asked and specific information is collected, whereas an unstructured interview is more or less like a casual conversation, where in-depth topic areas are covered and other information apart from the topic may also be obtained.
Questionnaire
It is the technique used to extract information from a number of people. This method can be adopted and used only by a skillful analyst. The questionnaire consists of a series of questions framed together in a logical manner. The questions are simple, clear and to the point. This method is very useful for obtaining information from people who are concerned with the usage of the system and who are living in different countries. The questionnaire can be mailed or sent to people by post. This is the cheapest source of fact finding.
Record View
The information related to the system may be published in sources like newspapers, magazines, journals, documents, etc. This record review helps the analyst to get valuable information about the system and the organization.
Observation
Unlike the other fact finding techniques, in this method the analyst himself visits the organization and observes and understands the flow of documents, the working of the existing system, the users of the system, etc. For this method to be adopted, it takes an experienced analyst to perform this job, as he knows which points should be noticed and highlighted. An analyst may observe unwanted things as well, which can simply cause delay in the development of the new system.
Sunday, 8 November 2015
How to Install Laravel 4 with an Apache Web Server on Ubuntu 14.04
Laravel is an open source PHP framework for web developers. It aims to provide an easy, elegant way for developers to get a fully functional web application running quickly.
Here we will discuss how to set up Laravel step by step from scratch.
Preparing the server / environment
sudo apt-get update
sudo apt-get upgrade
Preparing to install PHP 5
Installing Apache, PHP and MySQL
sudo apt-get install apache2
sudo apt-get install php5
sudo apt-get install mysql-server
sudo apt-get install php5-mysql
The installation process is self-explanatory. You can verify the installed versions of PHP and Apache with:
php -v
apache2 -v
Installing necessary PHP extensions
sudo apt-get install unzip
sudo apt-get install curl
sudo apt-get install openssl
sudo apt-get install php5-mcrypt
Install Composer (systemwide)
curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /usr/local/bin/composer
Activate mod_rewrite
sudo a2enmod rewrite
sudo service apache2 restart
Install Laravel 4
cd /var/www/laraveldev
wget https://github.com/laravel/laravel/archive/master.zip
unzip master.zip && cd laravel-master/ && mv * ../ && cd ..
rm -r laravel-master && rm master.zip
Make the storage folder writeable and restart the server:
sudo chmod -R 777 app/storage
sudo service apache2 restart
First run of Laravel 4
Now edit app/routes.php and add a new route:
Route::get('/mytest', function() {
    return "Oh yeah, this really works!";
});
Now navigate to it in your browser:
http://your-domain-or-ip/mytest
Drupal caching using cache_get and cache_set
The Drupal functions cache_get and cache_set are used for caching data temporarily or permanently in the cache table of the Drupal database.
Using the cache_set function is fairly simple; below is the syntax of how you can implement it.
cache_set($key, $value, 'cache', CACHE_TEMPORARY);
// or
cache_set($key, $value, 'cache', CACHE_PERMANENT);
cache_set takes four parameters. The first is the key; this parameter is unique across the system and is used to differentiate between records in the table, and the key is also used to retrieve the data from the database. The second parameter is the value, that is, the data to be stored under the key.
The third parameter is the name of the table to store the data in, and the fourth parameter is the cache type, temporary or permanent: if temporary, the cache is flushed when cron runs; if the permanent cache type is selected, the cache never expires.
The cache_get function is used to retrieve data from the cache table based on the key. Below is the syntax of how you can implement this.
cache_get($key);
The cache_get function, when invoked with a key, returns an object which contains the cache id as cid, the value in the data property, and expire, the time at which it will expire. If expire is set to zero the cache will never expire. Below is a full example of how this is implemented.
$tempdata = cache_get('cachekey');
if (isset($tempdata->data)) {
  return $tempdata->data;
}
else {
  $value = "Test Cache Value";
  cache_set('cachekey', $value, 'cache', CACHE_TEMPORARY);
  return $value;
}