Security Access Control
Access control:- refers to exerting control over who can interact with a resource. Often, but not always, an authority does the controlling. The resource can be a building, a group of buildings, or a computer-based information system, but it can be as simple as a restroom stall where access is controlled by using a coin to open the door.
Access control is, in reality, an everyday phenomenon. A lock on a car door is essentially a form of access control, as is a PIN on a bank's ATM. Access control is of prime importance when people seek to secure important, confidential, or sensitive information and equipment.
· Access control system operation
When a credential is presented to a reader, the reader sends the credential's information, usually a number, to a control panel, a highly reliable processor. The control panel compares the credential's number to an access control list, grants or denies the request, and sends a transaction log to a database. When access is denied based on the access control list, the door remains locked. If there is a match between the credential and the access control list, the control panel operates a relay that in turn unlocks the door. The control panel also ignores a door-open signal to prevent an alarm. Often the reader provides feedback, such as a flashing red LED when access is denied and a flashing green LED when access is granted.
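The sequence above (reader sends a number to the panel, the panel checks its access control list, logs the transaction, and drives the relay) can be sketched as follows. The credential numbers and door name are illustrative, not taken from any real product.

```python
# Minimal sketch of the control-panel decision described above.
# Credential numbers and door names are illustrative.

ACCESS_LIST = {"server-room": {10423, 10588}}  # door -> allowed credential numbers
TRANSACTION_LOG = []                           # stands in for the panel's database

def present_credential(door: str, credential_number: int) -> bool:
    """Compare the credential against the access list, log the
    transaction, and return True if the relay should unlock the door."""
    granted = credential_number in ACCESS_LIST.get(door, set())
    TRANSACTION_LOG.append((door, credential_number,
                            "granted" if granted else "denied"))
    return granted  # True -> operate relay (unlock), suppress door-open alarm

print(present_credential("server-room", 10423))  # reader flashes green
print(present_credential("server-room", 99999))  # reader flashes red, door stays locked
```

A real panel would also debounce the reader input and time-limit the relay; this sketch only captures the list-lookup-and-log decision itself.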
The above description illustrates a single-factor transaction. Credentials can be passed around, thus subverting the access control list. For example, Alice has access rights to the server room but Bob does not. If Alice gives Bob her credential, or Bob takes it, he now has access to the server room. To prevent this, two-factor authentication can be used: the presented credential and a second factor are both needed for access to be granted. The second factor can be a PIN, a second credential, operator intervention, or a biometric input.
There are three types (factors) of authenticating information:[1]
- something the user knows, e.g. a password, pass-phrase, or PIN
- something the user has, such as a smart card
- something the user is, such as a fingerprint, verified by biometric measurement
Passwords are a common means of verifying a user's identity before access is given to information systems. In addition, a fourth factor of authentication is now recognized: someone you know, where another person who knows you can provide a human element of authentication in systems set up to allow such scenarios. For example, a user may have their password but have forgotten their smart card. If the user is known to designated cohorts, the cohorts may provide their smart card and password, which, combined with the user's existing factor, provides two factors for the user with the missing credential and three factors overall to allow access.
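The factor-counting rule above can be sketched as follows. The key point is that two credentials of the same type still count as a single factor; all names and values are illustrative.

```python
# Sketch of multi-factor counting: access requires at least two
# *distinct* factor types. Factor values are illustrative.

KNOW, HAVE, ARE = "knows", "has", "is"

def sufficient_factors(presented, required=2):
    """presented: iterable of (factor_type, value) pairs.
    Two credentials of the same type still count as one factor."""
    return len({ftype for ftype, _ in presented}) >= required

# Password + smart card: two factor types -> granted.
print(sufficient_factors([(KNOW, "s3cret"), (HAVE, "card-7741")]))  # True
# Two passwords are still only "something you know" -> denied.
print(sufficient_factors([(KNOW, "s3cret"), (KNOW, "hunter2")]))    # False
```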
· Credential
A credential is a physical/tangible object, a piece of knowledge, or a facet of a person's physical being that enables an individual to access a given physical facility or computer-based information system. Typically, credentials can be something you know (such as a number or PIN), something you have (such as an access badge), something you are (such as a biometric feature), or some combination of these. The typical credential is an access card, key fob, or other key. There are many card technologies, including magnetic stripe, bar code, Wiegand, 125 kHz proximity, 26-bit card-swipe, contact smart cards, and contactless smart cards. Also available are key fobs, which are more compact than ID cards and attach to a key ring. Typical biometric technologies include fingerprint, facial recognition, iris recognition, retinal scan, voice, and hand geometry.
· Access control system components
An access control point, which can be a door, turnstile, parking gate, elevator, or other physical barrier, is a point where granting access can be electronically controlled. Typically the access point is a door. An electronic access control door can contain several elements. At its most basic there is a stand-alone electric lock, which is unlocked by an operator with a switch. To automate this, operator intervention is replaced by a reader. The reader could be a keypad where a code is entered, a card reader, or a biometric reader. Readers do not usually make an access decision, but send a card number to an access control panel that verifies the number against an access list. To monitor the door position, a magnetic door switch is used; in concept the door switch is not unlike those on refrigerators or car doors. Generally only entry is controlled and exit is uncontrolled. In cases where exit is also controlled, a second reader is used on the opposite side of the door. In cases where exit is not controlled (free exit), a device called a request-to-exit (RTE) device is used. Request-to-exit devices can be a push-button or a motion detector. When the button is pushed or the motion detector detects motion at the door, the door alarm is temporarily ignored while the door is opened. Exiting a door without having to electrically unlock it is called mechanical free egress; this is an important safety feature.
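The door-switch and request-to-exit logic above can be condensed into a single decision function. This is an illustrative model of the rule, not any real controller's firmware.

```python
# Sketch of the door-monitoring logic described above: the magnetic
# door switch reports open/closed, and the forced-door alarm is
# suppressed only when the panel granted access or a request-to-exit
# (RTE) device fired (free egress).

def door_alarm(door_open: bool, access_granted: bool, rte_triggered: bool) -> bool:
    """Return True if the forced-door alarm should sound."""
    if not door_open:
        return False
    # Opening after a grant, or free egress via RTE, is legitimate.
    return not (access_granted or rte_triggered)

print(door_alarm(door_open=True, access_granted=False, rte_triggered=True))   # False: free egress
print(door_alarm(door_open=True, access_granted=False, rte_triggered=False))  # True: forced door
```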
System Security
System security is a branch of computer technology known as information security as applied to computers and networks. The objective of computer security includes protection of information and property from theft, corruption, or natural disaster, while allowing the information and property to remain accessible and productive to its intended users. The term computer system security means the collective processes and mechanisms by which sensitive and valuable information and services are protected from publication, tampering, or collapse by unauthorized activities, untrustworthy individuals, and unplanned events. The strategies and methodologies of computer security often differ from most other computer technologies because of their somewhat elusive objective of preventing unwanted computer behavior instead of enabling wanted computer behavior.
The technologies of computer security are based on logic. As security is not necessarily the primary goal of most computer applications, designing a program with security in mind often imposes restrictions on that program's behavior. There are four general approaches to security in computing:
1. Trust all the software to abide by a security policy but the software is not trustworthy (this is computer insecurity).
2. Trust all the software to abide by a security policy and the software is validated as trustworthy (by tedious branch and path analysis, for example).
3. Trust no software but enforce a security policy with mechanisms that are not trustworthy (again, this is computer insecurity).
4. Trust no software but enforce a security policy with trustworthy hardware mechanisms.
Many systems have unintentionally resulted in the first possibility. Since approach two is expensive and non-deterministic, its use is very limited. Approaches one and three lead to failure. Because approach four is often based on hardware mechanisms and avoids abstractions and a multiplicity of degrees of freedom, it is more practical. Combinations of approaches two and four are often used in a layered architecture, with thin layers of two and thick layers of four.
There are various strategies and techniques used to design security systems. However, there are few, if any, effective strategies to enhance security after design. One technique enforces the principle of least privilege to a great extent: an entity has only the privileges that are needed for its function. That way, even if an attacker gains access to one part of the system, fine-grained security ensures that it is just as difficult for them to access the rest.
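The principle of least privilege can be sketched as a simple privilege table; the entities and privilege strings below are illustrative.

```python
# Sketch of least privilege: each entity carries only the privileges
# its function needs, so compromising one entity does not expose the
# rest. Role and privilege names are illustrative.

PRIVILEGES = {
    "log-reader":  {"read:/var/log"},
    "web-server":  {"read:/srv/www", "bind:80"},
    "backup-job":  {"read:/home", "write:/backup"},
}

def authorize(entity: str, action: str) -> bool:
    """Allow an action only if it appears in the entity's own grant set."""
    return action in PRIVILEGES.get(entity, set())

# Even a fully compromised web server cannot touch the backups.
print(authorize("web-server", "bind:80"))        # True
print(authorize("web-server", "write:/backup"))  # False
```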
Furthermore, by breaking the system up into smaller components, the complexity of individual components is reduced, opening up the possibility of using techniques such as automated theorem proving to prove the correctness of crucial software subsystems. This enables a closed-form solution to security that works well when only a single well-characterized property can be isolated as critical, and that property is also amenable to mathematical analysis. Not surprisingly, it is impractical for generalized correctness, which probably cannot even be defined, much less proven. Where formal correctness proofs are not possible, rigorous use of code review and unit testing represents a best-effort approach to making modules secure.
The design should use "defense in depth", where more than one subsystem must be violated to compromise the integrity of the system and the information it holds. Defense in depth works when breaching one security measure does not provide a platform to facilitate subverting another. Also, the cascading principle acknowledges that several low hurdles do not make a high hurdle, so cascading several weak mechanisms does not provide the safety of a single stronger mechanism.
Subsystems should default to secure settings, and wherever possible should be designed to "fail secure" rather than "fail insecure" (see fail-safe for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable, and free decision on the part of legitimate authorities in order to make it insecure.
In addition, security should not be an all-or-nothing issue. The designers and operators of systems should assume that security breaches are inevitable. Full audit trails should be kept of system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks. Finally, full disclosure helps to ensure that when bugs are found, the "window of vulnerability" is kept as short as possible.
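An append-only audit trail as described can be sketched as follows. The in-memory class stands in for a remote append-only store, and the log entries are invented for illustration.

```python
# Sketch of an append-only audit trail: records can be appended and
# read back, but there is no API to rewrite or delete entries, so an
# intruder who gains application access cannot cover their tracks.

class AppendOnlyAuditLog:
    def __init__(self):
        self._entries = []

    def append(self, event: str) -> None:
        """Add an event to the end of the trail; never overwrites."""
        self._entries.append(event)

    def entries(self):
        """Read-only view of the trail (a tuple cannot be mutated)."""
        return tuple(self._entries)

log = AppendOnlyAuditLog()
log.append("2024-01-01T12:00Z login alice")
log.append("2024-01-01T12:05Z read /etc/shadow DENIED")
print(len(log.entries()))  # 2
```

In a real deployment the store would live on a separate hardened host, so compromising the audited machine does not grant write access to its history.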
Application Security
Application
security encompasses
measures taken throughout the application's life-cycle to prevent exceptions in
the security
policy of an application or the underlying system (vulnerabilities) through flaws in the design, development, deployment, upgrade, or maintenance of the application.
Applications control only the use of resources granted to them, not which resources are granted to them. They, in turn, determine the use of these resources by the application's users through application security.
The Open Web Application Security Project (OWASP) and the Web Application Security Consortium (WASC) publish updates on the latest threats that impair web-based applications. This helps developers, security testers, and architects focus on better design and mitigation strategies. The OWASP Top 10 has become an industry norm for assessing web applications.
Security testing techniques scour applications for vulnerabilities or security holes. These vulnerabilities leave applications open to exploitation. Ideally, security testing is implemented throughout the entire software development life cycle (SDLC) so that vulnerabilities may be addressed in a timely and thorough manner. Unfortunately, testing is often conducted as an afterthought at the end of the development cycle.
Vulnerability scanners, and more specifically web application scanners, otherwise known as penetration testing tools (i.e. ethical hacking tools), have historically been used by security organizations within corporations and by security consultants to automate the security testing of HTTP requests and responses; however, this is not a substitute for actual source code review. Code reviews of an application's source code can be accomplished manually or in an automated fashion. Given the common size of individual programs (often 500,000 lines of code or more), the human brain cannot execute the comprehensive data flow analysis needed to completely check all circuitous paths of an application program for vulnerability points. The human brain is better suited to filtering, interpreting, and reporting the outputs of commercially available automated source code analysis tools than to tracing every possible path through a compiled code base to find root-cause vulnerabilities.
The two types of automated tools associated with application vulnerability detection (application vulnerability scanners) are penetration testing tools (often categorized as Black Box testing tools) and static code analysis tools (often categorized as White Box testing tools). Tools in the Black Box testing arena include IBM Rational AppScan, the HP Application Security Center[4] suite of applications (through the acquisition of SPI Dynamics[5]), and Nikto (open source). Tools in the static code analysis arena include Coverity,[6] GrammaTech,[7] Klocwork,[8] Parasoft,[9] PreEmptive Solutions,[10] and Veracode.[11]
Banking and large e-commerce corporations have been the early-adopter customer profile for these types of tools. It is commonly held within these firms that both Black Box and White Box testing tools are needed in the pursuit of application security. Typically, Black Box testing tools (meaning penetration testing tools) are ethical hacking tools used to attack the application surface to expose vulnerabilities suspended within the source code hierarchy; penetration testing tools are executed against the already deployed application. White Box testing tools (meaning source code analysis tools) are used by either application security groups or application development groups. Typically introduced into a company through the application security organization, the White Box tools complement the Black Box testing tools in that they give specific visibility into the root vulnerabilities within the source code in advance of the source code being deployed. Vulnerabilities identified with White Box and Black Box testing are typically in accordance with the OWASP taxonomy for software coding errors. White Box testing vendors have recently introduced dynamic versions of their source code analysis methods, which operate on deployed applications. Given that the White Box testing tools have dynamic versions similar to the Black Box testing tools, both can be correlated in the same software error detection paradigm, ensuring full application protection to the client company.
The advances in professional malware targeted at the Internet customers of online organizations have driven a change in web application design requirements since 2007. It is generally assumed that a sizable percentage of Internet users will be compromised through malware and that any data coming from their infected hosts may be tainted. Therefore, application security has begun to manifest more advanced anti-fraud and heuristic detection systems in the back office, rather than within the client-side or web server code.
Network Security
Network security[1]
consists of the provisions and policies
adopted by a network administrator
to prevent and monitor unauthorized
access, misuse, modification, or denial of a computer
network and network-accessible resources. Network
security involves the authorization of access to data in a network, which is
controlled by the network administrator. Users choose or are assigned an ID and
password or other authenticating information that allows them access to
information and programs within their authority. Network security covers a variety of computer networks, both public and private, that are used in everyday jobs: conducting transactions and communications among businesses, government agencies, and individuals. Networks can be private, such as within a company, or open to public access. Network security is involved in organizations, enterprises, and other types of institutions. It does as its title explains: it secures the network, as well as protecting and overseeing the operations being done on it. The most common and simple way of protecting a network resource is by assigning it a unique name and a corresponding password.
Network security starts with authenticating the user, commonly with a username and a password. Since this requires just one detail in addition to the user name (the password, which is something the user 'knows'), this is sometimes termed one-factor authentication. With two-factor authentication, something the user 'has' is also used (e.g. a security token or 'dongle', an ATM card, or a mobile phone); and with three-factor authentication, something the user 'is' is also used (e.g. a fingerprint or retinal scan).
Once
authenticated, a firewall enforces access policies such as what
services are allowed to be accessed by the network users.[2] Though effective to prevent
unauthorized access, this component may fail to check potentially harmful
content such as computer
worms or Trojans being transmitted over the network. Anti-virus
software or an intrusion
prevention system
(IPS)[3]
help detect and inhibit the action of such malware. An anomaly-based intrusion detection system may also monitor the network and traffic for unexpected (i.e. suspicious)
content or behavior and other anomalies to protect resources, e.g. from denial of service attacks or an employee accessing
files at strange times. Individual events occurring on the network may be
logged for audit purposes and for later high-level analysis.
Communication
between two hosts using a network may be encrypted to maintain privacy.
Honeypots, essentially decoy network-accessible resources, may be deployed
in a network as surveillance and early-warning tools, as the honeypots are not
normally accessed for legitimate purposes. Techniques used by the attackers
that attempt to compromise these decoy resources are studied during and after
an attack to keep an eye on new exploitation techniques. Such analysis may be used
to further tighten security of the actual network being protected by the
honeypot.
Data Security
Software-based security solutions encrypt data to prevent it from being stolen. However, a malicious program or a hacker may corrupt the data in order to make it unrecoverable or unusable. Similarly, encrypted operating systems can be corrupted by a malicious program or a hacker, making the system unusable. Hardware-based security solutions can prevent read and write access to data and hence offer very strong protection against tampering and unauthorized access.
Hardware-based or hardware-assisted computer security offers an alternative to software-only computer security. Security tokens such as those using PKCS#11 may be more secure because physical access is required in order for them to be compromised. Access is enabled only when the token is connected and the correct PIN is entered (see two-factor authentication). However, dongles can be used by anyone who can gain physical access to them. Newer technologies in hardware-based security address this problem, offering strong protection for data.
Working of hardware-based security: a hardware device allows a user to log in, log out, and set different privilege levels by performing manual actions. The device uses biometric technology to prevent malicious users from logging in, logging out, and changing privilege levels. The current state of the device's user is read by controllers in peripheral devices such as hard disks. Illegal access by a malicious user or program is interrupted by the hard disk and DVD controllers based on the user's current state, making illegal access to the data impossible. Hardware-based access control is more secure than protection provided by operating systems, as operating systems are vulnerable to malicious attacks by viruses and hackers. The data on hard disks can be corrupted after malicious access is obtained. With hardware-based protection, software cannot manipulate the user privilege levels; a hacker or malicious program cannot gain access to secure data protected by hardware or perform unauthorized privileged operations. The hardware protects the operating system image and file system privileges from being tampered with. Therefore, a highly secure system can be created using a combination of hardware-based security and sound system administration policies.
Types of Attack
· DoS/DDoS Attack:-
In
computing, a denial-of-service attack (DoS
attack) or distributed denial-of-service attack (DDoS attack)
is an attempt to make a machine or network resource unavailable to its intended
users. Although the means to carry out,
motives for, and targets of a DoS attack may vary, it generally consists of the
efforts of one or more people to temporarily or indefinitely interrupt or
suspend services of a host connected to the Internet.
Perpetrators
of DoS attacks typically target sites or services hosted on high-profile web servers such as banks, credit card payment gateways, and even root nameservers. The term is generally used relating
to computer
networks, but is not limited
to this field; for example, it is also used in reference to CPU resource management.[1]
One common method of attack involves saturating the target machine with external communications requests, such that it cannot respond to legitimate traffic, or responds so slowly as to be rendered effectively unavailable. Such attacks usually lead to a server overload. In general terms, DoS attacks are implemented either by forcing the targeted computer(s) to reset, by consuming their resources so that they can no longer provide their intended service, or by obstructing the communication media between the intended users and the victim so that they can no longer communicate adequately.
Denial-of-service
attacks are considered violations of the IAB's Internet proper use policy, and also violate the acceptable
use policies
of virtually all Internet
service providers.
They also commonly constitute violations of the laws of individual nations.
When a DoS attacker sends many packets of information and requests to a single network adapter, every computer on that network can experience effects from the attack.
· Backdoor :-
The threat of backdoors surfaced when multiuser and networked
operating systems became widely adopted. Petersen and Turn discussed computer
subversion in a paper published in the proceedings of the 1967 AFIPS Conference. They
noted a class of active infiltration attacks that use "trapdoor"
entry points into the system to bypass security facilities and permit direct
access to data. The use of the word trapdoor
here clearly coincides with more recent definitions of a backdoor. However,
since the advent of public key cryptography the term trapdoor has
acquired a different meaning. More generally, such security
breaches were discussed at length in a RAND
Corporation task force report published under ARPA
sponsorship by J.P. Anderson and D.J. Edwards in 1970.
A
backdoor in a login system might take the form of a hard coded user and password combination which
gives access to the system. A famous example of this sort of backdoor was used
as a plot device in the 1983 film WarGames, in which the architect of the "WOPR" computer system had inserted a
hardcoded password (his dead son's name) which gave the user access to the
system, and to undocumented parts of the system (in particular, a video
game–like simulation mode and direct interaction with the artificial
intelligence).
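The hard-coded credential pattern described above can be sketched as follows. The user table is invented, and the backdoor pair is modeled on the film's "Joshua" password; no real system is depicted.

```python
# Illustrative sketch of a hard-coded backdoor in a login routine:
# alongside the normal user database, one credential pair is baked
# into the code itself. All names are invented for illustration.

USERS = {"alice": "correct-horse"}  # stand-in for the real user database

def login(user: str, password: str) -> bool:
    if user == "joshua" and password == "joshua":  # hidden hard-coded backdoor
        return True
    return USERS.get(user) == password             # normal credential check

print(login("alice", "correct-horse"))  # legitimate user: True
print(login("joshua", "joshua"))        # backdoor grants access: True
print(login("mallory", "guess"))        # ordinary failure: False
```

Because the backdoor lives in the code rather than the user database, no amount of account auditing will reveal it; only source review (or disassembly) can.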
An
attempt to plant a backdoor in the Linux kernel, exposed in November 2003, showed how subtle such a code change can be.[3] In this case, a two-line change appeared
to be a typographical error, but actually gave the caller to the sys_wait4
function root access to the system.
Although
the number of backdoors in systems using proprietary
software (software whose source code is not publicly available) is not
widely credited, they are nevertheless frequently exposed. Programmers have
even succeeded in secretly installing large amounts of benign code as Easter
eggs in programs,
although such cases may involve official forbearance, if not actual permission.
· Spoofing:-
In the context of network
security, a spoofing attack is a situation in
which one person or program successfully masquerades as another by falsifying
data and thereby gaining an illegitimate advantage.
Spoofing and TCP/IP
Many of the protocols in the TCP/IP suite do not
provide mechanisms for authenticating the source or destination of a message.
They are thus vulnerable to spoofing attacks when extra precautions are not
taken by applications to verify the identity of the sending or receiving host. IP
spoofing and ARP spoofing in
particular may be used to leverage man-in-the-middle attacks
against hosts on a computer
network. Spoofing attacks which take advantage of TCP/IP suite protocols
may be mitigated with the use of firewalls capable of deep packet inspection
or by taking measures to verify the identity of the sender or recipient of a
message.
Referrer spoofing
Some websites, especially pornographic paysites,
allow access to their materials only from certain approved (login-) pages. This
is enforced by checking the referrer
header of the HTTP
request. This referrer header however can be changed (known as "referrer
spoofing" or "Ref-tar spoofing"), allowing users to gain
unauthorized access to the materials.
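A minimal sketch of why this check is weak: the Referer header is supplied entirely by the client, so the server-side comparison below can be satisfied by any client that simply writes the approved value into the header. The URLs are illustrative.

```python
# Sketch of referrer checking and why it is trivially spoofed: the
# server compares a client-supplied header against an allow-list,
# but nothing prevents the client from forging that header.

APPROVED_REFERRERS = {"https://example.com/login"}  # illustrative approved page

def server_allows(request_headers: dict) -> bool:
    """The server's check: is the Referer one of the approved pages?"""
    return request_headers.get("Referer") in APPROVED_REFERRERS

honest  = {"Referer": "https://other-site.example/"}
spoofed = {"Referer": "https://example.com/login"}  # forged by the client

print(server_allows(honest))   # False
print(server_allows(spoofed))  # True: the check is bypassed
```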
Poisoning of file-sharing networks
"Spoofing" can also refer to copyright holders placing
distorted or unlistenable versions of works on file-sharing networks, to
discourage downloading from these sources.
Caller ID spoofing
In public telephone networks, it has for a long while been
possible to find out who is calling you by looking at the Caller ID information
that is transmitted with the call. There are technologies that transmit this
information on landlines, on cellphones and also with VoIP. Unfortunately,
there are now technologies (especially associated with VoIP) that allow callers
to lie about their identity, and present false names and numbers, which could
of course be used as a tool to defraud or harass. Because there are services
and gateways that interconnect VoIP with other public phone networks, these
false Caller IDs can be transmitted to any phone on the planet, making Caller ID information next to useless. Due to the distributed geographic nature of the Internet, VoIP calls can be generated in a different country from the receiver, which makes it very difficult to establish a legal framework to control those who would use fake Caller IDs as part of a scam.
Voice Mail Spoofing and How to Protect Yourself From Unauthorized Access
Spoofing technology enables someone to make it seem as though they are calling from your telephone when they
are not. The use of this technology for deceptive purposes is illegal.
To prevent unauthorized voicemail access through fraudulent activity such as caller ID spoofing, you should continue to use the voicemail passcode established when you set up your account. If you skip using that passcode, your voice mail messages can be vulnerable to unauthorized access via spoofing. In most cases, you can change a voicemail passcode or adjust settings to re-enable the use of a passcode for retrieving messages; just access your voicemail and follow the prompts.
E-mail address spoofing
The sender information shown in e-mails
(the "From" field) can be spoofed easily. This technique is commonly
used by spammers to hide the
origin of their e-mails and leads to problems such as misdirected bounces
(i.e. e-mail spam backscatter).
E-mail address spoofing is done in much the same way as writing a forged return address using snail mail: as long as the message fits the protocol (the postal analogue of a correct stamp and postal code), the SMTP server will send it. It can be done using a mail server with telnet.[2]
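The point above, that the "From" header is just sender-supplied text, can be illustrated with Python's standard email library. The addresses below are invented, and the sketch only constructs the message; no mail is sent.

```python
# Sketch of e-mail "From" spoofing: the From header is ordinary text
# supplied by the sender, exactly like a return address written on an
# envelope. Addresses are invented; this builds a message but sends nothing.

from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "ceo@example.com"   # forged: the sender does not own this address
msg["To"] = "victim@example.org"
msg["Subject"] = "Urgent"
msg.set_content("Please wire the funds today.")

# A plain SMTP server would accept this header as-is, just as the
# postal service accepts any return address on an envelope.
print(msg["From"])  # ceo@example.com
```

Countermeasures such as SPF, DKIM, and DMARC exist precisely because the protocol itself does not verify this field.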
GPS Spoofing
A GPS spoofing attack attempts to deceive a GPS receiver by
broadcasting a slightly more powerful signal than that received from the GPS
satellites, structured to resemble a set of normal GPS signals. These spoofed
signals, however, are modified in such a way as to cause the receiver to
determine its position to be somewhere other than where it actually is,
specifically somewhere determined by the attacker. Because GPS systems work by
measuring the time it takes for a signal to travel from the satellite to the
receiver, a successful spoofing requires that the attacker know precisely where
the target is so that the spoofed signal can be structured with the proper
signal delays. A GPS spoofing attack begins by broadcasting a slightly more
powerful signal that produces the correct position, and then slowly deviates
away towards the position desired by the spoofer, because moving too quickly
will cause the receiver to lose signal lock altogether, at which point the
spoofer works only as a jammer. It has been suggested that the capture of a Lockheed RQ-170 drone aircraft in northeastern Iran in December 2011 was the result of such an attack. GPS spoofing attacks had been predicted and discussed in the GPS community previously, but no known example of a malicious spoofing attack has yet been confirmed.
· Replay:-
A replay attack is a form of network
attack in which a valid data transmission is maliciously or fraudulently
repeated or delayed. This is carried out either by the originator or by an adversary
who intercepts the data and retransmits it, possibly as part of a masquerade
attack by IP
packet
substitution (such as a stream cipher attack).
Suppose Alice
wants to prove her identity to Bob. Bob requests her password as proof of
identity, which Alice dutifully provides (possibly after some transformation
like a hash function);
meanwhile, Mallory is eavesdropping on the conversation and keeps the password
(or the hash). After the interchange is over, Mallory (posing as Alice)
connects to Bob; when asked for a proof of identity, Mallory sends Alice's
password (or hash) read from the last session, which Bob accepts.
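The Alice/Bob/Mallory exchange above can be sketched in a few lines; the helper name `respond` and the nonce-based challenge-response fix are illustrative assumptions, not part of any specific protocol.

```python
import hashlib
import secrets

def respond(password, nonce):
    # The client proves knowledge of the password by hashing it together
    # with a server-supplied nonce, so each session's proof is different.
    return hashlib.sha256((nonce + password).encode()).hexdigest()

# Without a nonce: Mallory records Alice's static hash and replays it.
static_proof = hashlib.sha256(b"alice-password").hexdigest()
replayed = static_proof               # captured off the wire
assert replayed == static_proof       # Bob cannot tell replay from Alice

# With a nonce: Bob issues a fresh challenge per session, so the old
# recorded response fails verification next time.
nonce1 = secrets.token_hex(8)
proof1 = respond("alice-password", nonce1)
nonce2 = secrets.token_hex(8)         # next session's challenge
assert respond("alice-password", nonce2) != proof1
```

Session tokens with timestamps or one-time passwords achieve the same effect: the proof is bound to a single exchange, so a recorded copy is worthless later.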
· TCP Hijacking :-
In computer science, session hijacking is the exploitation of a valid computer
session—sometimes also called a session key—to gain unauthorized access to
information or services in a computer system. In particular, it is used to
refer to the theft of a magic cookie used to authenticate a user to a remote
server. It has particular relevance to web developers, as the HTTP cookies
used to maintain a session on many web sites can be easily stolen by an
attacker using an intermediary computer or with access to the saved cookies on
the victim's computer (see HTTP cookie theft).
A popular method is using source-routed IP packets. This allows a hacker at
point A on the network to participate in a conversation between B and C by
encouraging the IP packets to pass through its machine.
If source-routing is turned off, the hacker can use "blind" hijacking, whereby
it guesses the responses of the two machines. Thus, the hacker can send a
command, but can never see the response. However, a common command would be to
set a password allowing access from somewhere else on the net.
A hacker can also be "inline" between B and C, using a sniffing program to
watch the conversation. This is known as a "man-in-the-middle attack".
A common component of such an attack is to execute a denial-of-service (DoS)
attack against one end-point to stop it from responding. This attack can be
either against the machine to force it to crash, or against the network
connection to force heavy packet loss.
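A minimal sketch of why cookie theft works: the server below authenticates by token alone, so whoever presents the cookie is treated as its owner. The names (`login`, `whoami`, the `sessions` dict) are made up for illustration.

```python
import secrets

sessions = {}                          # token -> user, an in-memory store

def login(user):
    token = secrets.token_hex(16)      # the "magic cookie"
    sessions[token] = user
    return token

def whoami(token):
    # Authentication is by token alone; the server cannot distinguish
    # the legitimate user from anyone who has stolen the cookie.
    return sessions.get(token)

victim_cookie = login("alice")
stolen = victim_cookie                 # sniffed, or read from saved cookies
assert whoami(stolen) == "alice"       # the attacker is now "alice"
```

Binding the session to other attributes (client address, TLS channel) and expiring tokens quickly narrows the window in which a stolen cookie is useful.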
· Social Engineering:-
Social engineering, in the context of security, is understood to mean the art
of manipulating people into performing actions or divulging confidential
information. While it is similar to a confidence trick or simple fraud, it is
typically trickery or deception for the purpose of information gathering,
fraud, or computer system access; in most cases the attacker never comes
face-to-face with the victims.
"Social engineering" as an act of psychological manipulation had previously
been associated with the social sciences, but its usage has caught on among
computer professionals.
· Dumpster Diving:-
Dumpster diving is the colloquial name for going through somebody's garbage --
which will usually be in dumpsters for large organizations. This is a powerful
tactic because it is protected by social taboos: trash is considered dirty,
and once something goes into the trash, it is assumed to be forgotten. The
reality is that most company trash is fairly clean and provides a gold mine of
information.
Phone lists
Help map out the power structure of the company, give possible account names,
and are essential in appearing as a member of the organization.
Memos
Reveal activities inside the target
organization.
Policy manuals
Today's employee manuals give
instructions on how not to be victimized by hackers, and likewise help the
hacker know which attacks to avoid, or at least try in a different manner than
specified in the policy manual.
Calendars of events
Tell the hackers when everyone will be elsewhere and not logged into the
system -- the best time to break in.
System manuals, packing crates
Tell the hackers about new systems that they can break into.
Print outs
Source code is frequently found in dumpsters, along with e-mails (revealing
account names) and Post-it notes containing written passwords.
Disks, tapes, CD-ROMs
People forget to erase storage media, leaving sensitive data exposed. These
days, dumpsters may contain a large number of "broken" CD-Rs. The CD-ROM
"burning" process is sensitive and can lead to failures, which are simply
thrown away. However, some drives can still read these disks, allowing the
hacker to read a half-way completed backup or other sensitive piece of
information.
Old hard drives
Like CD-ROMs, information from broken drives can usually be recovered; it
depends only upon the hacker's determination.
· Password Guessing:-
In cryptanalysis and computer security, password cracking is the process of
recovering passwords from data that has been stored in or transmitted by a
computer system. A common approach is to repeatedly try guesses for the
password. Another common approach is to claim to have "forgotten" the password
and then have it changed.
The purpose of password cracking might be to help a user recover a forgotten
password (though installing an entirely new password is less of a security
risk, it involves system administration privileges), to gain unauthorized
access to a system, or to serve as a preventive measure by which system
administrators check for easily crackable passwords. On a file-by-file basis,
password cracking is utilized to gain access to digital evidence to which a
judge has allowed access but for which the particular file's access is
restricted.
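The "repeatedly try guesses" approach can be sketched as a dictionary attack against an unsalted hash; the word list and leaked hash are invented for the example.

```python
import hashlib

def sha256_hex(pw):
    return hashlib.sha256(pw.encode()).hexdigest()

# A leaked, unsalted password hash (value invented for the sketch).
leaked_hash = sha256_hex("sunshine")

wordlist = ["password", "123456", "letmein", "sunshine", "qwerty"]

def crack(target_hash, candidates):
    # Hash each candidate and compare with the leak; with no salt,
    # one precomputed table would crack every user with this password.
    for guess in candidates:
        if sha256_hex(guess) == target_hash:
            return guess
    return None

assert crack(leaked_hash, wordlist) == "sunshine"
```

Salting and deliberately slow hashes (bcrypt, scrypt, Argon2) are the standard defenses: they force the attacker to redo the work per user and per guess.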
· Brute Force :-
In cryptography, a brute-force attack, or exhaustive key search, is a
strategy that can, in theory, be used against any encrypted data (except for
data encrypted in an information-theoretically secure manner). Such an attack
might be utilized when it is not possible to take advantage of other
weaknesses in an encryption system (if any exist) that would make the task
easier. It involves systematically checking all possible keys until the
correct key is found. In the worst case, this would involve traversing the
entire search space.
The key length used in the
encryption determines the practical feasibility of performing a brute-force
attack, with longer keys exponentially more difficult to crack than shorter
ones. Brute-force attacks can be made less effective by obfuscating
the data to be encoded, something that makes it more difficult for an attacker
to recognise when he/she has cracked the code. One of the measures of the
strength of an encryption system is how long it would theoretically take an
attacker to mount a successful brute-force attack against it.
Brute-force attacks are an application of brute-force search,
the general problem-solving technique of enumerating all candidates and
checking each one.
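Exhaustive key search can be shown over a deliberately tiny space; `check` below stands in for trial decryption, and everything here is a toy assumption meant only to show how the search space grows with key length.

```python
from itertools import product
import string

ALPHABET = string.ascii_lowercase      # 26 symbols per key position

def check(key, secret):
    return key == secret               # stand-in for trial decryption

def brute_force(secret, length):
    # Systematically enumerate every possible key of the given length.
    tried = 0
    for combo in product(ALPHABET, repeat=length):
        tried += 1
        key = "".join(combo)
        if check(key, secret):
            return key, tried
    return None, tried

key, attempts = brute_force("cab", 3)
assert key == "cab"
# The worst case is the whole space: 26**3 = 17,576 keys at length 3,
# and each extra character multiplies that by 26.
assert attempts <= 26 ** 3
```

This is why key length dominates feasibility: every added key bit (or character) multiplies the worst-case work, while the attacker's checking loop stays the same.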
· Software Exploitation :-
An exploit (from the verb to exploit, in the meaning of
using something to one’s own advantage) is a piece of software,
a chunk of data, or sequence of commands that takes advantage of a bug,
glitch
or vulnerability
in order to cause unintended or unanticipated behaviour to occur on computer
software, hardware, or something electronic (usually computerised). This
frequently includes such things as gaining control of a computer system or
allowing privilege escalation
or a denial-of-service attack.
· Trojan Horse:-
A Trojan horse, or Trojan, is a standalone malicious file or program that,
unlike a computer virus, does not attempt to inject itself into other files,
and often masquerades as a legitimate file or program. Trojan horses can make
copies of themselves, steal information, or harm their host computer systems.
The first and many current Trojan horses attempt to appear as helpful
programs. Others rely on drive-by downloads in order to reach target
computers.
The term is derived from the Trojan
Horse story in Greek
mythology because Trojan horses employ a form of “social engineering,”
presenting themselves as harmless, useful gifts, in order to persuade victims
to install them on their computers (just as the Trojans were tricked into
taking the Trojan Horse inside their
gates).
SAC Process
The SAC process is completed in 5 stages:
a) SRR (Software Requirement Review)
b) SDR (Software Design Review)
c) CR (Code Review)
d) PT (Penetration Testing)
e) DR (Deployment Review)
· SRR (Software Requirement Review)
Do you know most of the bugs in software are due to incomplete or inaccurate
functional requirements? The software code, no matter how well it's written,
can't do anything if there are ambiguities in the requirements.
It's better to catch requirement ambiguities and fix them early in the
development life cycle. The cost of fixing a bug after completion of
development or product release is too high. So it's important to perform
requirement analysis and catch these incorrect requirements before the design
specification and project implementation phases of the SDLC.
How to measure functional software requirement specification (SRS) documents?
We need to define some standard tests to measure the requirements. Once each
requirement is passed through these tests, you can evaluate and freeze the
functional requirements.
Let's take an example. You are working on a web-based application. The
requirement is as follows: "The web application should be able to serve user
queries as early as possible." How will you freeze the requirement in this
case? What will be your requirement satisfaction criteria? To get the answer,
ask the stakeholders: how much response time is acceptable? If they say they
will accept a response within 2 seconds, then this is your requirement
measure. Freeze this requirement and carry out the same procedure for the next
requirement. We just learned how to measure requirements and freeze them for
the design, implementation, and testing phases.
Now let's take another example. I was working on a web-based project. The
client (stakeholders) specified the project requirements for the initial phase
of the project development. My manager circulated all the requirements in the
team for review. When we started discussing these requirements, we were
shocked: everyone had his or her own conception of the requirements. We found
a lot of ambiguities in the terms specified in the requirement documents,
which were later sent to the client for review and clarification. The client
had used many ambiguous terms with many different meanings, making it
difficult to analyze the exact meaning. The next version of the requirement
doc from the client was clear enough to freeze for the design phase. From this
example we learned: requirements should be clear and consistent.
The next criterion for testing the requirements specification is to discover
missing requirements. Many times project designers don't get a clear idea
about specific modules, and they simply assume some requirements during the
design phase. No requirement should be based on assumptions. Requirements
should be complete, covering each and every aspect of the system under
development. Specifications should state both types of requirements, i.e.,
what the system should do and what it should not.
Generally I use my own method to uncover unspecified requirements. When I read
the software requirements specification document (SRS), I note down my own
understanding of the requirements that are specified, plus other requirements
the SRS document is supposed to cover. This helps me ask questions about the
unspecified requirements, making them clearer. For checking requirement
completeness, divide the requirements into three sections: 'must implement'
requirements, requirements that are not specified but are 'assumed', and the
third type, 'imagination' requirements. Check whether all types of
requirements are addressed before the software design phase.
Check whether the requirements are related to the project goal. Sometimes
stakeholders have their own expertise, which they expect to appear in the
system under development. They don't consider whether that requirement is
relevant to the project at hand. Make sure to identify such requirements, and
try to avoid the irrelevant ones in the first phase of the project development
cycle. If that is not possible, ask the stakeholders: why do you want to
implement this specific requirement? This will describe the particular
requirement in detail, making it easier to design the system considering the
future scope.
But how do you decide whether a requirement is relevant or not? Simple answer:
set the project goal and ask this question: will not implementing this
requirement cause any problem in achieving our specified goal? If not, then it
is an irrelevant requirement. Ask the stakeholders if they really want to
implement these types of requirements.
In short, the requirements specification (SRS) doc should address the
following: project functionality (what should be done and what should not);
software, hardware, and user interfaces; system correctness, security, and
performance criteria; and implementation issues (risks), if any.
Conclusion: I have covered all aspects of requirement measurement. To be
specific about requirements, I will summarize requirement testing in one
sentence: "Requirements should be clear and specific with no uncertainty,
requirements should be measurable in terms of specific values, requirements
should be testable with some evaluation criteria for each requirement, and
requirements should be complete, without any contradictions." Testing should
start at the requirement phase to avoid further requirement-related bugs.
Communicate more and more with your stakeholders to clarify all the
requirements before starting project design and implementation.
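The 2-second response-time example above is exactly the kind of requirement that can be frozen as an executable check; the handler name `serve_query` and the simulated delay are placeholders for the real application, invented for this sketch.

```python
import time

MAX_RESPONSE_SECONDS = 2.0             # the frozen, measurable requirement

def serve_query():
    time.sleep(0.05)                   # placeholder for real processing
    return "result"

start = time.perf_counter()
result = serve_query()
elapsed = time.perf_counter() - start

assert result == "result"
assert elapsed <= MAX_RESPONSE_SECONDS # pass/fail is now unambiguous
```

A vague "as early as possible" cannot fail a test; a numeric bound can, which is what makes the requirement measurable and testable.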
· SDR (Software Design Review)
A design review is a milestone within a product development process whereby a
design is evaluated against its requirements in order to verify the outcomes
of previous activities and identify issues before committing to -- and if need
be re-prioritising -- further work. The ultimate design review, if successful,
therefore triggers the product launch or product release.
The conduct of design reviews is compulsory as part of design controls when
developing products in certain regulated contexts, such as medical devices.
By definition, a
review must include persons who are external to the design team.
Contents of a design review
In order to
evaluate a design against its requirements, a number of means may be
considered, such as:
- Physical tests.
- Engineering simulations.
- Examinations (Walk-through).
Timing of design reviews
Most formalised systems engineering
processes recognise that the cost of
correcting a fault increases as it progresses through the development
process. Additional effort spent in
the early stages of development to discover and correct errors is therefore
likely to be worthwhile. Design reviews are an example of such an effort.
Therefore, a number of design reviews may be carried out, for example to
evaluate the design against different sets of criteria (consistency, usability, ease of localisation, environmental) or during various stages of the
design process.
· CR (Code Review)
Code review is systematic examination (often known as peer
review) of computer source code. It is intended to find and fix mistakes overlooked in the initial
development phase,
improving both the overall quality of software and the developers' skills. Reviews
are done in various forms such as pair programming, informal walkthroughs, and formal inspections.
Code reviews can
often find and remove common vulnerabilities such as format string exploits, race conditions, memory leaks and buffer overflows,
thereby improving software security. Online software repositories based on Subversion (with Redmine or Trac), Mercurial, Git or others allow
groups of individuals to collaboratively review code. Additionally, specific
tools for collaborative code review can facilitate the code review process.
Automated code reviewing software lessens the developer's task of reviewing
large chunks of code by systematically checking source code for known
vulnerabilities. A recent study by VDC Research reports that 17.6% of the
embedded software engineers surveyed currently use automated tools for peer
code review, and 23.7% expect to use them within 2 years.
Capers Jones' ongoing analysis of over 12,000 software development projects
showed that the latent defect discovery rate of formal inspection is in the
60-65% range. For informal inspection, the figure is less than 50%. The latent
defect discovery rate for most forms of testing is about 30%.
Typical code review rates are about 150 lines of code per hour. Inspecting and
reviewing more than a few hundred lines of code per hour for critical software
(such as safety-critical embedded software) may be too fast to find errors.
Industry data indicates that code reviews can accomplish at most an 85% defect
removal rate, with an average rate of about 65%.
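The 150-LOC-per-hour figure turns directly into a planning estimate; the helper below is a back-of-the-envelope sketch derived from the rate quoted above, not an industry formula.

```python
REVIEW_RATE_LOC_PER_HOUR = 150         # typical review rate quoted above

def review_hours(loc, rate=REVIEW_RATE_LOC_PER_HOUR):
    # Hours of reviewer time needed to cover `loc` lines at `rate`.
    return loc / rate

assert review_hours(150) == 1.0
assert review_hours(3000) == 20.0      # a 3,000-line change: ~20 hours
```

Pushing the rate past a few hundred lines per hour trims this number, but, per the data above, at the cost of missed defects.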
The types of defects detected in code reviews have also been studied. Based on
empirical evidence, it seems that up to 75% of code review defects affect
software evolvability rather than functionality, making code reviews an
excellent tool for software companies with long product or system life cycles.
Types
Code review practices fall into three main categories: pair programming,
formal code review, and lightweight code review.
Formal code review, such as a Fagan inspection, involves a careful and
detailed process with multiple participants and multiple phases. Formal code
reviews are the traditional method of review, in which software developers
attend a series of meetings and review code line by line, usually using
printed copies of the material. Formal inspections are extremely thorough and
have been proven effective at finding defects in the code under review.
Lightweight code review typically requires less overhead than formal code
inspections, though it can be equally effective when done properly.
Lightweight reviews are often conducted as part of the normal development
process:
- Over-the-shoulder – One developer looks over the author's shoulder as the latter walks through the code.
- Email pass-around – Source code management system emails code to reviewers automatically after checkin is made.
- Pair programming – Two authors develop code together at the same workstation, as is common in Extreme Programming.
- Tool-assisted code review – Authors and reviewers use specialized tools designed for peer code review.
Some of these may
also be labeled a "Walkthrough" (informal) or "Critique"
(fast and informal).
Many teams that
eschew traditional, formal code review use one of the above forms of
lightweight review as part of their normal development process. A code review
case study published in the book Best Kept Secrets of Peer Code Review
found that lightweight reviews uncovered as many bugs as formal reviews, but
were faster and more cost-effective.
Criticism
Historically, formal code reviews have required a considerable investment in
preparation for the review event and in execution time. Use of code analysis
tools can support this activity, especially tools that work in the IDE, as
they provide developers with direct feedback on coding-standard compliance.
· PT (Penetration Testing)
A penetration
test, occasionally pentest, is a method of evaluating the security of a computer system or network by simulating
an attack from malicious outsiders (who do not have an authorized means of
accessing the organization's systems) and malicious insiders (who have some
level of authorized access). The process involves an active analysis of the
system for any potential vulnerabilities that could result from poor or
improper system configuration, both known and unknown hardware or software
flaws, or operational weaknesses in process or technical countermeasures. This
analysis is carried out from the position of a potential attacker and can
involve active exploitation of security vulnerabilities.
Security issues
uncovered through the penetration test are presented to the system's owner.
Effective penetration tests will couple this information with an accurate
assessment of the potential impacts to the organization and outline a range of
technical and procedural countermeasures to reduce risks.
Penetration tests are valuable for several reasons:
1. Determining the feasibility of a particular set of attack vectors
2. Identifying higher-risk vulnerabilities that result from a combination of lower-risk vulnerabilities exploited in a particular sequence
3. Identifying vulnerabilities that may be difficult or impossible to detect with automated network or application vulnerability scanning software
4. Assessing the magnitude of potential business and operational impacts of successful attacks
5. Testing the ability of network defenders to successfully detect and respond to the attacks
6. Providing evidence to support increased investments in security personnel and technology
Penetration tests are a component of a full security audit. For example, the
Payment Card Industry Data Security Standard (PCI DSS), a security and
auditing standard, requires both annual and ongoing penetration testing (after
system changes).
· DR (Deployment Review)
This review is done as the last step, after all the other reviews. As an
example of deployment review in practice: as you can see from Special:Version,
many people have written extensions that then got deployed on Wikimedia sites,
and most of those programmers didn't work for the Wikimedia Foundation. In
2011 and 2012 the process had a bottleneck, as extensions sat awaiting code
review for months at a time; the Foundation hoped to speed that up by
separating the "should WMF sites get this feature?" evaluation step from the
"does this code work and perform well?" step, and by better integrating
extension review into WMF engineers' community service time.