• This article sparks the imagination and sheds light on how our computers and devices connect to the internet: a brief overview of DHCP and the interactions that make our internet connection possible.

    Almost everybody uses the Internet at home or in the office, and we get connected via an Internet modem, DSL, LAN or wireless LAN connection.

    All you need to do is open your modem's application interface and click Connect; cable users simply plug the network cable into the computer. In a matter of seconds, you are connected to the network and to the Internet.

    Have you ever wondered how that modem or cable connects to the service provider? Ever wondered about the series of processes that take place within those few seconds before you gain access to the network?
    I think you have!

    Well, DHCP makes all that possible.


    What is DHCP?
    DHCP (Dynamic Host Configuration Protocol) is a network protocol used to configure network devices so they can communicate on an IP network. A DHCP client uses the protocol to acquire configuration information from the server, such as an IP address, a default route, DNS server addresses and other needed settings.
    These IP addresses are released and renewed as devices leave and re-join the network.


    Your ISP has a DHCP server. It can assign IP addresses based on the modem's or computer's MAC address. When your modem comes online, it broadcasts on the network to indicate that it is looking for an IP address. The DHCP server listens for that broadcast and starts talking to the modem. At this point, the modem or computer transmits its MAC address to the DHCP server and, in return, is assigned an IP address. With that IP address, the modem can connect to the network and to the internet.

    ISPs usually use DHCP to allow customers to join the internet with minimum effort. Likewise, home network equipment like broadband routers (Wired/Wireless) offer DHCP support for added convenience in joining home computers/devices to the LAN.

    DHCP environments require a DHCP server set up with the appropriate configuration parameters for the given network. Devices running the DHCP client software can then automatically retrieve these settings from the server as required.
    Using DHCP on a network means System Administrators do not need to configure these parameters individually for each client device connecting to the network.

    The above explains how your modem, computer and devices connect to the network/internet.
    Now, let’s see the DHCP Client/server interaction when allocating a new network address.


    DHCP Client/Server Interaction
    DHCP configuration is accomplished through the following sequence of steps (a small packet-level sketch follows the list):

    1.       The DHCP client broadcasts a DHCPDISCOVER message on the local subnet.

    2.       All servers on the subnet receive the DHCPDISCOVER message. If the servers have any IP addresses available, they broadcast a DHCPOFFER message. A transaction identifier (the "xid" field) in the packets lets the client know that a certain DHCPOFFER corresponds to a certain DHCPDISCOVER.

    3.       The DHCP client receives all DHCPOFFER messages. Different servers may offer the client different network parameters. The client selects the best DHCPOFFER, and throws away the rest. The client then broadcasts a DHCPREQUEST message, filling in the "server identifier" field of the DHCPREQUEST with the IP address of the server whose DHCPOFFER it has chosen.

    4.       The servers all receive the DHCPREQUEST. They all look to see if their IP address is in the "server identifier" field of the message. If a server does not find its IP address there, it knows the client has rejected its DHCPOFFER. If the server does find its IP address there, it can proceed in one of two ways. If the IP address is still available, and everything is going well, the server broadcasts a DHCPACK to the client. If there is some sort of trouble, the server sends a DHCPNAK instead.

    5.       The client receives either a DHCPACK or a DHCPNAK from the server it selected. If the client receives a DHCPACK, then all is well, and it has now obtained an IP address and network parameters. If the client receives a DHCPNAK, it can either give up or it can restart the process by sending another DHCPDISCOVER. If, for some reason, the client receives a DHCPACK but is still not satisfied, it can broadcast a DHCPDECLINE to the server.
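
    To make the handshake more concrete, here is a minimal sketch of step 1 in Python using the scapy packet library: it broadcasts a DHCPDISCOVER and prints whatever offers come back. This is only an illustration of the exchange, not production client code; the interface name "eth0" is an assumption for a Linux host, and sending raw packets requires root privileges.

        # Minimal DHCPDISCOVER sketch using scapy (pip install scapy, run as root).
        # Assumes the network interface is "eth0"; adjust for your system.
        from scapy.all import (Ether, IP, UDP, BOOTP, DHCP, srp,
                               get_if_hwaddr, mac2str, RandInt)

        IFACE = "eth0"
        mac = get_if_hwaddr(IFACE)

        discover = (
            Ether(src=mac, dst="ff:ff:ff:ff:ff:ff")              # link-layer broadcast
            / IP(src="0.0.0.0", dst="255.255.255.255")           # client has no IP yet
            / UDP(sport=68, dport=67)                            # BOOTP/DHCP ports
            / BOOTP(chaddr=mac2str(mac), xid=RandInt(),          # xid ties OFFER to DISCOVER
                    flags=0x8000)                                # ask for a broadcast reply
            / DHCP(options=[("message-type", "discover"), "end"])
        )

        # Broadcast the DISCOVER and collect DHCPOFFER replies for a few seconds.
        answered, _ = srp(discover, iface=IFACE, timeout=4, multi=True, verbose=False)
        for _, offer in answered:
            print("Offered IP:", offer[BOOTP].yiaddr)
            print("Options   :", offer[DHCP].options)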



    DHCP is very interesting; imagine the stress we would go through to connect to a local or wireless network without the DHCP service.






  • A Review of Honeypots: Tracking Hackers by Lance Spitzner

    The Bee in the Honeypot


    I recently read Honeypots: Tracking Hackers by Lance Spitzner because I wanted to learn more about the technology behind these "hackable" computers. Very little technical information has ever been written on the subject. In fact, Lance is the first to complete an in-depth study of honeypots since Clifford Stoll's The Cuckoo's Egg in 1990. Overall, I was impressed with the detail of the book. Lance went to great lengths to make his readers aware of just what honeypots are. But I simply do not agree with the implementation of honeypots within a secured network.
    The basic concept is simple. First, you build a computer with the purpose of allowing an attacker to compromise it. Then, you throw in a bunch of interesting files to lure him in. Finally, connect it to the internet with the least amount of security possible and wait. When an attacker connects to this computer, his attempts to compromise it are logged. The information collected during the session is then used to pinpoint the hacker's location and possibly serve as evidence in a criminal trial against him.

    From this perspective, a honeypot would seem like a formidable weapon in the battle against the elusive Blackhat hacker. However, Lance suggests inserting these systems directly into your internal network, placing them right beside the computers that you work so hard to keep secured. This is supposed to give an attacker a more suitable target to compromise. The assumption is that the attacker will aim for the unsecured honeypot instead of the other, more sensitive computers within your network. The way I see it, assumptions are dangerous, and working hard to build a secure network that's full of holes just doesn't add up.

    Lance devotes a considerable amount of time in the book to the proper placement of these systems. Basically, he suggests placing one honeypot in every zone of your internal network. This is not a logical security implementation. Opening a security hole in every zone of your network raises a number of issues, the first of which involves the chances of an attacker even compromising the honeypot once he has entered a particular zone.

    For example, let's say that the DMZ zone of my home network consists of a file server, a web server, and a honeypot. The chance that an attacker will try to compromise the honeypot once he enters that zone is one in three. Not bad odds, until you consider the level of security on the other two computers within that zone. The file and web servers are going to have as much security placed on them as possible, while the honeypot is left wide open to whatever threat comes its way. This is bound to raise suspicion in the hacker's mind. Hackers are commonly perceived to be naïve script kiddies. In reality, they are meticulous in their art and are all too aware of the latest security defenses. The odds that a skilled hacker will just fall into any trap that has been placed in his path are slim to none. So, what's he going to do when he notices that unsecured honeypot sitting beside two highly secured servers? He's going to skip right over it and head straight for the goods.

    Of course, the chances are still good that he will try to compromise the honeypot, even though he knows it's a trap. Why? Because he knows that if he gains control of the honeypot computer, he can use it to reach every other computer within that zone, unrestricted. Think about it: for the honeypot to be active in a particular zone, it must be able to communicate with the other computers that reside within that zone. For example, computer B must be able to accept connection requests to and from computer A in order to provide computer C with a stable network connection. So, gaining access to computer C could serve as a bridge to computers B and A.

    What I found most discouraging about the study was Lance's process for choosing a honeypot solution. He covers four of the most popular applications, including Back Officer Friendly, Specter, Honeyd, and Man Trap. There are many more available, but what they all seem to have in common is an overall lack of potential as a security solution. In fact, most of them offer services that only mimic those of other popular security applications. For example, let's take a look at Back Officer Friendly.

    Back Officer Friendly is a lightweight honeypot solution designed to run as a watcher application on the Windows operating system. Lance includes a copy of the software on the accompanying CD-ROM for evaluation. Once installed, it can be set up to emulate a variety of services on your computer, including telnet, HTTP, or SMTP. When an attacker tries to connect to one of these services, the honeypot recognizes the attempt and takes over instead. The attacker is greeted with a fake reply appropriate to the particular service and begins to interact with the honeypot as if it were the real thing. The user is then notified of the attack. Back Officer Friendly logs the attacker's IP address, as well as any passwords he uses to try to log into the system.
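
    To illustrate the general technique behind such low-interaction honeypots (this is only a sketch of the idea, not Back Officer Friendly's actual code), a few lines of Python are enough to stand up a fake login prompt that records the visitor's IP address and whatever credentials are typed into it. The port number 2323 and the log file name are arbitrary choices for the example.

        import socket
        import datetime

        # Toy low-interaction honeypot: present a fake login prompt, log what the
        # visitor types, then refuse the login. Port 2323 avoids needing root; a real
        # deployment of this idea would listen on the actual service port (telnet: 23).
        HOST, PORT, LOGFILE = "0.0.0.0", 2323, "honeypot.log"

        def record(addr, user, password):
            with open(LOGFILE, "a") as log:
                stamp = datetime.datetime.now().isoformat()
                log.write(f"{stamp} {addr[0]} user={user!r} password={password!r}\n")

        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((HOST, PORT))
            srv.listen()
            while True:
                conn, addr = srv.accept()
                with conn:
                    conn.sendall(b"login: ")
                    user = conn.recv(256).strip()
                    conn.sendall(b"Password: ")
                    password = conn.recv(256).strip()
                    record(addr, user, password)
                    conn.sendall(b"Login incorrect\r\n")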

    The drawback to this software is that it will only monitor services on your computer that are not being monitored or used by any other program. This means two things: 1) If you're using one of these services for any other purpose (for example, http to run a web server), then Back Officer Friendly cannot be employed to help secure it. 2) If you're not using one of these services and wish to have Back Officer Friendly monitor it for malicious activity, then you have to allow that service complete access through your firewall!

    I don't know about you, but I'm not comfortable allowing a service as dangerous as telnet through my firewall. Furthermore, the thought of granting access to any service that allows an attacker to interact with it sends a shiver up my spine. A more logical solution would be to allow the firewall itself to monitor these services. Most good firewalls offer the same logging abilities as Back Officer Friendly and will monitor the same services whether they are being used or not. Furthermore, most of the software mentioned in the book is extremely expensive, and each one is designed to run either on or for a particular operating system. If you're running a tight network with multiple operating systems, then you're going to be spending a considerable amount of money just to invite hackers to come and play on it.

    The broad range of honeypot classifications adds yet another level of confusion to the decision making process. Lance classifies honeypots according to two main classifications, then three functionality classifications, and finally, two levels of interaction classifications. Lance defines the two main classifications like this, "...production honeypots provide value by protecting a specific resource or organization, such as acting like a burglar alarm and detecting attacks. Research honeypots are different; they add value by gaining information on a threat, such as capturing an attacker's keystrokes" (278). Fair enough. Let's move on to the three functionality classifications. They are prevention, detection and response.

    Prevention honeypots are designed to deter an attacker's attempts to compromise the system. For example, by flashing a warning banner at him to let him know that you are aware of his presence and are monitoring his actions on the network. Detection honeypots are designed to detect attacks that have penetrated your firewall and identify the attacker who is responsible. For example, by logging his ip addresses, keystrokes, and hop points. Response honeypots are designed to capture and reveal new techniques and exploits that are being used by the Blackhat community. This helps to increase the security community's incident response time by learning how hackers do what they do.

    This is where the confusion sets in. If the main goal of a production honeypot is prevention and detection, then what is the main goal of a production response honeypot? On the other hand, what would be the main goal of a research honeypot with the prevention and detection functions built into it? These three classifications seem absolutely redundant. In fact, they only exist to describe functions that are already present in the first two classifications. Let's look to the last two classifications for clarification.

    These have to do with the level of interaction that the attacker has with the honeypot itself. In a low level of interaction honeypot, the attacker will be shown a simple logon prompt or an http error page upon successfully connecting with a service. Neither will let him go any farther, and both log his attempts to do so. A high level of interaction honeypot will do the same with the exception of actually allowing the attacker to use the logon prompt. This will give him physical access to the system. Or if he connects through http, he will be presented with a real website designed just for him to vandalize. Again, these are functions that can be found in the first two classifications.

    This leads me to believe that there are only two classifications to choose from, production and research. A production honeypot is geared toward detecting and preventing attackers by limiting their level of interaction with the honeypot. A research honeypot is geared more toward understanding how attackers compromise computer systems by increasing their level of interaction with the honeypot. The other five classifications are simply functionalities that are contained within these two classifications. They are not separate entities that can be mixed and matched.

    In conclusion, I found the book to be very educational. It is easy to read and offers a pleasant change from the jargon-riddled prose found in most technical writing. Subjects such as networking fundamentals and hacking methods are all covered in detail using language that even the layman can understand. However, I simply do not agree with Lance's implementation and placement of these systems. The assertion that a honeypot can add an extra layer of security to complex network environments defies common security logic. In fact, they may actually hinder the ability of other security implementations and compromise the integrity of the entire network.
    Published by Matthew Austin

    (I read this article on the Yahoo Voices page and thought it would be great to share it with our honorable readers here.)


  • “Be social” is the buzzword of recent years. No matter whether we are at home, in the gym, at work, or elsewhere, we are haunted by the need to be part of something online. We live alternative online lives, and we have dense networks of relationships that vary depending on the context (social, work, family).

    This human propensity for aggregation is now the foundation of the “social network” concept, a multi-dimensional interdependent community of actors or nodes. These actors/nodes are predominantly individuals, but can also be groups, companies, or even countries. Each relationship or linkage between a pair of nodes is typically a flow of material or non-material resources that may include social and/or emotional support, friendship, companionship, religious beliefs, time, information and interests, passions, expertise, money, business transactions, shared activity, etc.


  • If you don't think your data is vulnerable, just search Google for “data breach,” and limit the search to news in the last month. You'll get more than 2,000 results. And while most may be redundant, there are enough unique stories to demonstrate that if your company network is accessible via the Internet, it is potentially under attack.


    In 2011, the top 10 reported data breaches netted hackers more than 170 million data records, including personally identifiable information (PII) such as names, addresses, and email addresses. More serious information including login credentials, credit card information, and medical treatment information was also exposed.

    While there is no data on the security employed on the systems from which this data was taken, the variety of companies and the volume of data compromised is significant enough to point out that no system should be considered safe.

    Even if you encrypt your data, you are only part of the way there. In its "2012 Data Protection & Breach Readiness Guide," the Online Trust Alliance (OTA) notes that data and disk encryption is just one of 12 security best-practices. But why isn't encryption by itself enough?

    Unfortunately, encryption is not enough because of the number and variety of attack vectors that are launched against your network every day. According to Verizon’s "2012 Data Breach Investigations Report," the vast majority of all breaches in 2011 were engineered through online attacks in the form of hacking, malware, or use of social engineering attacks -- an approach where human interaction, rather than software, is used as the attack vector.

    Let's look at the list of “Security Best-Practices” provided by the OTA (first column of the table below) with my added comments and thoughts as to the purpose behind the recommendation (second column). Please note, I am in no way affiliated with the Online Trust Alliance, and I had no input into the report cited.

    Table 1: Security Best-Practices & Commentary

    1. Use of Secure Socket Layer (SSL) for all data forms
       Purpose/Comments: Limits network snooping. CAUTION: Because of known attacks against SSL, only TLS v1.1 and 1.2 should be used.

    2. Extended Validation of SSL certificates for all commerce and banking applications
       Purpose/Comments: This is a consumer protection recommendation. It does nothing for securing data.

    3. Data and Disk Encryption
       Purpose/Comments: Limits data access. Disk encryption, depending on its implementation, uses either a software key or a hardware key to encrypt the volume and/or the Master Boot Record (MBR). Data encryption, depending on implementation, can encrypt fields within a table or entire tables; the encryption can be symmetric or asymmetric, and it prevents access to the information in the tables. (A short encryption sketch follows the commentary below.)

    4. Multilayered firewall protection
       Purpose/Comments: Limits cross-tier network access.

    5. Encryption of wireless routers
       Purpose/Comments: Limits network entry points by blocking unauthorized wireless access.

    6. Default disabling of shared folders
       Purpose/Comments: Limits network entry points by removing common shares and their associated, known passwords.

    7. Security risks of password re-set and identity verification security questions
       Purpose/Comments: Limits unauthorized password resets or unintentional leaks of password information.

    8. Upgrading browsers with integrated phishing and malware protection
       Purpose/Comments: Limits an attack vector.

    9. Email authentication to help detect malicious and deceptive email
       Purpose/Comments: Limits an attack vector.

    10. Automatic patch management for operating systems, applications and add-ons
        Purpose/Comments: Reduces zero-day exploits or malware delivered as a software patch.

    11. Inventory system access credentials
        Purpose/Comments: Limits loss of network access.

    12. Remote wiping of mobile devices
        Purpose/Comments: Limits loss of data from stolen, lost, or known-compromised mobile devices.

    Source: Online Trust Alliance and Hendry Betts III
    In the Purpose/Comments column above, I used the verb “limits” intentionally because nothing completely prevents users from responding to social engineering, phishing attacks via email or Web sites, or malicious downloads. User education is, in my opinion, the best tool to limit the impact of these types of attacks.
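
    As a small illustration of best-practice #3 from the table above (data and disk encryption), the sketch below encrypts a single sensitive field with the symmetric Fernet recipe from the Python cryptography package before it would be stored. The field value and the inline key generation are simplifications for the example; in practice the key would live in a key-management system, never alongside the data.

        # pip install cryptography
        from cryptography.fernet import Fernet

        # In a real deployment this key comes from a key-management system or HSM,
        # not from the code; generating it inline keeps the sketch self-contained.
        key = Fernet.generate_key()
        cipher = Fernet(key)

        # Encrypt a single sensitive field (e.g., a card number) before storage.
        plaintext_field = b"4111 1111 1111 1111"
        stored_value = cipher.encrypt(plaintext_field)

        # Even if the database is dumped, the attacker sees only the token...
        print(stored_value)

        # ...while an application holding the key can still recover the field.
        print(cipher.decrypt(stored_value))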

    Both my personal experience and the best-practices outlined in the OTA report show that there is no single silver bullet for data protection. The best-practices to protect your company's data engage the network, the data, and the users themselves. And, ultimately, I think the absolute best-practice is to expect a breach, actively monitor your networks, and educate the users.


    Article originally published on the Internet Evolution website.




  • Searching on the Internet today can be compared to dragging a net across the surface of the ocean.

    While a great deal may be caught in the net, there is still a wealth of information that is deep, and therefore, missed. The reason is simple: Most of the Web's information is buried far down on dynamically generated sites, and standard search engines never find it.

    According to Wikipedia, The Deep Web (also called the Deepnet, the Invisible Web, the Undernet or the hidden Web) is World Wide Web content that is not part of the Surface Web, which is indexable by standard search engines.

    The Deep Web is the set of information resources on the World Wide Web not reported by normal search engines.
    According to several studies, the principal search engines index only a small portion of the overall web content; the remaining part is unknown to the majority of web users.


    How would you react if you were told that beneath our feet there is a world larger than ours and much more crowded? Most of us would be shocked, and that is the reaction of anyone who first grasps the existence of the Deep Web: a network of interconnected systems that are not indexed, with a size hundreds of times larger than the current web (around 500 times).
    A very fitting definition was provided by the founder of BrightPlanet, Mike Bergman, who compared searching on the Internet today to dragging a net across the surface of the ocean: a great deal may be caught in the net, but there is a wealth of information that is deep and therefore missed.


    Ordinary search engines find content on the web using software called “crawlers”. This technique is ineffective for finding the hidden resources of the Web, which can be classified into the following categories (the toy crawler sketched after the list shows why link-following misses them):


    • Dynamic content: dynamic pages which are returned in response to a submitted query or accessed only through a form, especially if open-domain input elements (such as text fields) are used; such fields are hard to navigate without domain knowledge.
    • Unlinked content: pages which are not linked to by other pages, which may prevent Web crawling programs from accessing the content. This content is referred to as pages without backlinks (or inlinks).
    • Private Web: sites that require registration and login (password-protected resources).
    • Contextual Web: pages with content varying for different access contexts (e.g., ranges of client IP addresses or previous navigation sequence).
    • Limited access content: sites that limit access to their pages in a technical way (e.g., using the Robots Exclusion Standard, CAPTCHAs, or no-cache Pragma HTTP headers which prohibit search engines from browsing them and creating cached copies).
    • Scripted content: pages that are only accessible through links produced by JavaScript, as well as content dynamically downloaded from Web servers via Flash or Ajax solutions.
    • Non-HTML/text content: textual content encoded in multimedia (image or video) files or specific file formats not handled by search engines.
    • Text content using the Gopher protocol and files hosted on FTP that are not indexed by most search engines. Engines such as Google do not index pages outside of HTTP or HTTPS.
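
    To see why crawler-based indexing misses the categories above, consider the toy crawler sketched below: it can only ever reach pages that some already-known page links to, and it never fills in forms, executes JavaScript, or logs in. The seed URL is just an example.

        # Toy link-following crawler: it discovers pages only through plain href
        # links, which is why unlinked, form-gated, scripted or password-protected
        # content never enters its index.
        import re
        from urllib.parse import urljoin
        from urllib.request import urlopen

        def crawl(seed: str, limit: int = 20) -> set:
            seen, frontier = set(), [seed]
            while frontier and len(seen) < limit:
                url = frontier.pop(0)
                if url in seen:
                    continue
                seen.add(url)
                try:
                    html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
                except Exception:
                    continue                      # unreachable or non-HTML content
                for href in re.findall(r'href="([^"]+)"', html):
                    absolute = urljoin(url, href)
                    if absolute.startswith(("http://", "https://")):
                        frontier.append(absolute)
            return seen

        print(crawl("https://example.com"))       # example seed; replace with any site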

    A parallel web holding a much larger amount of information represents an invaluable resource for private companies, governments, and especially cybercrime. In the imagination of many people, the term Deep Web is associated with an anonymity that shelters criminal intent which cannot be pursued because it is submerged in an inaccessible world.
    As we will see, this interpretation of the Deep Web is deeply wrong: we are facing a network that is certainly different from the usual web, but one that in many ways repeats the same issues in a different form.



    Accessing the Deep Web
      
    To access the Deep Web, we will need to make use of the Tor network.


    What is Tor? How does it preserve anonymity?
    Tor is an acronym for “The Onion Router”, a system implemented to enable online anonymity. The Tor client software routes Internet traffic through a worldwide volunteer network of servers, hiding the user’s information and eluding any monitoring activity.
    As often happens, the project was born in the military sector, sponsored by the US Naval Research Laboratory, and from 2004 to 2005 it was supported by the Electronic Frontier Foundation.
    Today the software is developed and maintained by the Tor Project. A user navigating with Tor is difficult to trace, which protects their privacy, because the data is encrypted multiple times as it passes through the nodes (Tor relays) of the network.

    Connecting to the Tor network

    Imagine a typical scenario where Alice wants to connect to Bob using the Tor network. Let’s see, step by step, how this is possible.

    She makes an unencrypted connection to a centralized directory server containing the addresses of Tor nodes. After receiving the address list from the directory server, the Tor client software connects to a random node (the entry node) through an encrypted connection. The entry node makes an encrypted connection to a random second node, which in turn does the same to connect to a random third Tor node. The process goes on until it involves a node (the exit node) connected to the destination.
    Note that during Tor routing the nodes of each connection are chosen randomly, and the same node cannot be used twice in the same path.

    To ensure anonymity, the connections have a fixed duration: every ten minutes the client software changes the entry node to avoid statistical analysis that could compromise the user’s privacy.
    Up to now we have considered an ideal situation in which a user accesses the network only to connect to another user. To complicate the picture, in a real scenario Alice’s node could in turn be used to route other established connections between other users.

    A malevolent third party would not be able to tell which connections are initiated by the user and which are merely relayed as a node, making it impossible to monitor the communications.
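
    The layering idea can be sketched in a few lines of Python. This is purely a conceptual illustration of onion routing built on the Fernet cipher, not the real Tor protocol (which negotiates keys with each relay and moves traffic in fixed-size cells): the client wraps the payload in one encryption layer per relay, and each relay can peel off only its own layer.

        from cryptography.fernet import Fernet  # pip install cryptography

        # One symmetric key per relay in the circuit: entry, middle, exit.
        PATH = ["entry", "middle", "exit"]
        relay_keys = {name: Fernet(Fernet.generate_key()) for name in PATH}

        def build_onion(payload: bytes) -> bytes:
            # Encrypt for the exit node first, then wrap for middle, then entry,
            # so the entry node's layer ends up outermost.
            for name in reversed(PATH):
                payload = relay_keys[name].encrypt(payload)
            return payload

        def relay_through_circuit(cell: bytes) -> bytes:
            # Each relay removes exactly one layer; only the exit sees the payload.
            for name in PATH:
                cell = relay_keys[name].decrypt(cell)
            return cell

        cell = build_onion(b"GET / HTTP/1.1")
        print(relay_through_circuit(cell))  # b'GET / HTTP/1.1'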

    After this necessary digression on Tor routing, we are ready to enter the Deep Web simply by downloading the Tor software from the project’s official web site. Tor works on all major platforms, and many add-ons make it simple to integrate into existing applications, including web browsers. Although the network was designed to protect users’ privacy, to be truly anonymous it is suggested that you also go through a VPN.

    An even better way to navigate the Deep Web is to use the Tails OS distribution, which is bootable from any machine without leaving any trace on the host. The Tor Bundle, once installed, comes with its own portable Firefox version, ideal for anonymous navigation thanks to tight control of installed plugins; in an ordinary browser, common plugins could expose our identity.
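
    For scripts and command-line tools, the same routing can be used by pointing them at the local SOCKS proxy the Tor client opens. The sketch below assumes a Tor daemon already listening on its default port 9050 (the Tor Browser bundle listens on 9150 instead) and the Python requests library installed with SOCKS support; check.torproject.org is simply a convenient page that reports whether the request really exited through Tor.

        # pip install requests[socks]  -- a local Tor client must already be running.
        import requests

        TOR_SOCKS = "socks5h://127.0.0.1:9050"   # "socks5h" resolves DNS through Tor too,
        PROXIES = {"http": TOR_SOCKS,            # which matters for .onion addresses.
                   "https": TOR_SOCKS}

        resp = requests.get("https://check.torproject.org/", proxies=PROXIES, timeout=60)

        # The page congratulates you when the traffic arrived via a Tor exit node.
        print("Using Tor" if "Congratulations" in resp.text else "Not using Tor")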
    Once inside the network, where is it possible to go and what is it possible to find?

    Once inside the Deep Web, we must understand that navigation is quite different from the ordinary web: every search is more complex due to the absence of content indexing.

    A user that start it’s navigation in the Deep Web have to know that a common way to list the content is to adopt collection of Wikis and BBS-like sites which have the main purpose to aggregate links categorizing them in more suitable groups of consulting. Another difference that user has to take in mind is that instead of classic extensions (e.g. .com, .net, .org) the domains in the Deep Web generally end with the .onion suffix.

    Below is a short list of links, published on Pastebin, that have made the Deep Web famous.


    The Cleaned Hidden Wiki should also be a good starting point for your first explorations:
    http://3suaolltfj2xjksb.onion/hiddenwiki/index.php/Main_Page
    Be careful: some content is labeled with commonly used tags such as CP (child porn) or PD (pedophile); stay far away from it.

    The Deep Web is considered the place where everything is possible: you can find every kind of material and service for sale, most of it illegal. The hidden web offers cybercrime great business opportunities: hacking services, malware, stolen credit cards, weapons.

    The deep Web is estimated to be about 500 times larger than the surface Web and, on average, of about three times higher quality based on per-document scoring methods. On an absolute basis, total deep Web quality exceeds that of the surface Web by thousands of times. The total number of deep Web sites likely exceeds 200,000 today and is growing rapidly. Content on the deep Web has meaning and importance for every information seeker and market. More than 95% of deep Web information is publicly available without restriction. The deep Web also appears to be the fastest growing information component of the Web.

    We all know the potential of e-commerce on the ordinary web and its impressive growth over the last couple of years; now imagine a Deep Web market that is more than 500 times bigger and where there are no legal limits on what can be sold. We are facing a staggering business controlled by cybercriminal organizations.
    Speaking of dark markets, we cannot avoid mentioning the Silk Road web site, an online marketplace located in the Deep Web where the majority of products derive from illegal activities. Of course it’s not the only one; many other markets are run to address specific products, and believe me, many of them are terrifying.

    The figure below displays the distribution of deep Web sites by type of content.

    Figure 6. Distribution of Deep Web Sites by Content Type


    More than half of all deep Web sites feature topical databases. Topical databases plus large internal site documents and archived publications make up nearly 80% of all deep Web sites. Purchase-transaction sites — including true shopping sites with auctions and classifieds — account for another 10% or so of sites. The other eight categories collectively account for the remaining 10% or so of sites.


    Most transactions on the Deep Web accept the Bitcoin system for payments, allowing the purchase of any kind of product while preserving the anonymity of the transaction and encouraging trade in all kinds of illegal activities. We are faced with an autonomous system that enables criminal activity while ensuring the anonymity of transactions and the inability to track down the criminals.


     The most important findings from the analysis of the deep Web are that there is massive and meaningful content not discoverable with conventional search technology and that there is a nearly uniform lack of awareness that this critical content even exists.

    I will provide more information regarding this topic in the near future. In the meantime, find below a summary and some key facts regarding the Deep Web:


    · Public information on the deep Web is currently 400 to 550 times larger than the commonly defined World Wide Web.

    · The deep Web contains 7,500 terabytes of information compared to 19 terabytes of information in the surface Web.


    · The deep Web contains nearly 550 billion individual documents compared to the 1 billion of the surface Web.


    · More than 200,000 deep Web sites presently exist.


    · Sixty of the largest deep-Web sites collectively contain about 750 terabytes of information — sufficient by themselves to exceed the size of the surface Web forty times.


    · The deep Web is the largest growing category of new information on the Internet.


    · Deep Web sites tend to be narrower, with deeper content, than conventional surface sites.


    · Total quality content of the deep Web is 1,000 to 2,000 times greater than that of the surface Web.


    · Deep Web content is highly relevant to every information need, market, and domain.


    · More than half of the deep Web content resides in topic-specific databases.


    · A full ninety-five per cent of the deep Web is publicly accessible information — not subject to fees or subscriptions.


  • Anybody looking to strike up a heated debate among technologists need only ask, "Is the cloud private?" There is an old adage that, if you have to ask, the answer is "no." However, one can expect all kinds of responses to that simple question, from the technically savvy to the academic to the emotional. The simple answer: yes and no.

    The cloud privacy debate hinges in part on the question of whether data is more private when it is stored locally or encrypted remotely. One could argue that, when an enterprise turns over its computing resources to a service provider, it will get all of the benefits of a full-time IT staff, multiple connections to the Internet, and 24/7/365 network management. The skeptic, however, will say that putting all of one's security eggs in one basket (in this case, the cloud provider) is problematic.

    If your company opts for cloud services, remember that the cloud provider's first and most important responsibility is to make a profit and stay in business. As a result, a customer on a multitenant server could find its security impacted by others collocated on that server, though it remains within the security agreement it signed with the provider.

    For example, let's say a company co-hosts its servers on the same physical box as 10 other companies at a service provider. Even though the provider might be doing everything right technically and legally, one of those other companies on the co-hosted server might be doing something illegal. If law enforcement issues a subpoena to obtain all that company's data, it could take your data, as well -- without you knowing about it in advance. In fact, some subpoenas can state specifically that the provider is not allowed to tell the target or others impacted by the investigation that their data is being reviewed. This could affect your business operations if the virtual or physical server on which your company's data is hosted is taken down by law enforcement.

    There are other privacy vulnerabilities. Let's assume the service provider is using virtualization to separate each of the companies on the server. If one of the companies on that server were to go rogue and breach the hypervisor, it could gain access to the root and, therefore, all of the virtual servers connected to that hypervisor. The attacker could gain full access to all virtual machines on the server (including yours), steal private data, and be gone before the hosting provider realizes the breach.

    An IT manager can avoid these vulnerabilities by simply hosting data on a dedicated server. But that will not take advantage of the benefits of the cloud, including the ability to move your data quickly to various servers for load balancing, disaster recovery, and more.

    This does not mean that multitenant or cloud computing is not safe. Rather, good security practices are always necessary, regardless of where data is stored. For many companies, a cloud service provider can offer a higher level of security than a company could offer itself. A risk analysis that compares housing data locally or in the cloud will answer the basic question of whether to employ a service provider. If you're better off with a provider, bring in a strong negotiator when you draft the contract to ensure that the provider keeps your interests, and not its own, up front.

    Remember that if there is a breach, regardless of whether you use a cloud provider or host data yourself, your customers will blame you for data loss. Your reputation is at stake. Since your ability to secure the cloud ends at the perimeter of your network, make sure your SLA and security agreements address technology over which you have no control. And by all means, make sure everything in the cloud is encrypted securely. There is no excuse for losing unencrypted data to a breach, locally or remotely.

  • Microsoft today revealed a new look for its corporate logo, marking the first time in 25 years the company has changed its image and the fifth time overall. The new logo features the name "Microsoft" in the Segoe font — a proprietary font used in the firm’s products and marketing for several years -- alongside a multicolored Windows symbol intended to "signal the heritage but also signal the future.”


    Microsoft is preparing to launch a range of products this fall, including Windows 8 and a new Surface tablet running the OS, as well as Windows Phone 8. The software giant's new logo reflects a change in its products’ look and feel which relies heavily on a tile-based UI formerly known as Metro -- now it’s just called Windows 8 Style. It also arrives just months after introducing a new single-colored Windows 8 logo.


    The new corporate image will begin its rollout today, appearing on Microsoft.com and the company’s Twitter and Facebook accounts, followed by new TV commercials airing over the next few weeks.
    Speaking with The Seattle Times, Microsoft's general manager of brand strategy Jeff Hansen also commented on the company’s past logos and their influences. The first logo, used from 1975 to 1979, featured a disco-style typeface with the word Micro on one line and Soft below it, reflecting how co-founders Bill Gates and Paul Allen came up with the original company name by combining "microcomputers" and "software."


    The second logo was briefly used between 1980 and 1981; its jagged edges and strong diagonal typography reflected the computer and video-game culture of the time. The third logo, used from 1982 to 1986, introduced a stylized letter "o" with lines through it, while some tweaks in 1987 resulted in the logo most people are familiar with, featuring a slice in the first “o” and a connection between the letters "f" and “t”.


    The moment you see this headline, the first question that pops into your mind is: could a car get a computer virus? Well, the answer to that question is a capital YES!

    In the past, car viruses were rare because one of the only ways to infect a vehicle was through a mechanic, via the computer or software he used to diagnose problems with the car.

    More than 100 Texas drivers could have been excused for thinking that they had really horrendous luck or -- at least for the more superstitious among them -- that their vehicles were possessed by an evil spirit. That's because in 2010, more than 100 customers of a dealership called Texas Auto Center found their efforts to start their cars fruitless, and even worse, their car alarms blared ceaselessly, stopped only when the batteries were removed from the vehicles [source: Shaer].
    What seemed to some to be a rash of coincidence and mechanical failure turned out to be the work of a disgruntled employee-turned-hacker. Omar Ramos-Lopez, who had been laid off by the Texas Auto Center, decided to exact some revenge on his former Austin, Texas employer by hacking into the company's Web-based vehicle immobilization system, typically used to disable the cars of folks who had stopped making mandatory payments [source: Shaer]. Besides creating plenty of mayhem and generating a flood of angry customer complaints, Ramos-Lopez, who was eventually arrested, highlighted some of the vulnerabilities of our increasingly computer-dependent vehicles to a skilled and motivated hacker.
    Although Ramos-Lopez's attack generated a lot of attention, his hacking was fairly tame compared to the possibilities exposed by analysts at a number of different universities. Indeed, in 2010, researchers from the University of Washington and the University of California at San Diego proved that they could hack into the computer systems that control vehicles and remotely have power over everything from the brakes to the heat to the radio [source: Clayton]. Researchers from Rutgers University and the University of South Carolina also demonstrated the possibility of hijacking the wireless signals sent out by a car's tire pressure monitoring system, enabling hackers to monitor the movements of a vehicle.
    Taken together, these events show that cars are increasingly vulnerable to the sort of viruses (also known as malware) introduced by hackers that routinely bedevil, frustrate and harm PC users everywhere. Obviously, this has real implications for drivers, although the researchers themselves point out that hackers have not yet victimized many people. But the ramifications are clear.
    "If your car is infected, then anything that the infected computer is responsible for is infected. So, if the computer controls the windows and locks, then the virus or malicious code can control the windows and locks," says Damon Petraglia, who is director of forensic and information security services at Chartstone Consulting and has trained law enforcement officers in computer forensics. "Same goes for steering and braking."




    As high-technology continues to creep into horseless carriages everywhere, there's one thing we can all count on: abuse of that technology. According to Reuters, Intel's "top hackers" are on the case though, poring over the software which powers the fanciest of automobile technology in hopes of discovering (and dashing) various bugs and exploits.
    Except under the most specific of scenarios, the damaging results from an attack against an unsuspecting user's personal computer are often limited. Hackers may be able to cripple a computer, invade a user's privacy or even steal someone's identity. Causing personal injury or death, though, is typically out of the question. However, with an increasing amount of technology and software proliferating in modern vehicles, this could all change.
    "You can definitely kill people," asserts John Bumgarner, CTO of a non-profit which calls itself the U.S. Cyber Consequences Unit.
    As outlined in the following publication, Experimental Security Analysis of a Modern Automobile (pdf), researchers have already shown that a clever virus is capable of releasing or engaging brakes on a whim, even at high speeds. Such harrowing maneuvers could potentially extinguish the lives of both its occupants and others involved in the resulting accident. On certain vehicles, researchers were also able to lock and unlock doors, start and disable the engine and toggle the headlights off and on.
    Ford spokesman Alan Hall assures us, "Ford is taking the threat very seriously and investing in security solutions that are built into the product from the outset". Ford has been an industry leader in adopting advanced automotive technologies.
    Thus far, there have been no reported incidents of injury or death caused by automobile hacking. That's according to SAE International, a major standards committee for automotive and aerospace industries.
    When asked by Reuters whether or not there had been any such reports, most manufacturers declined to comment. However, McAfee executive Bruce Snell claims that automakers are still very concerned about it. Snell admits, "I don't think people need to panic now. But the future is really scary." McAfee, which is now owned by Intel, is the division of Intel investigating automobile cyber security.

    We can only hope and pray that solutions arrive early enough, before these viruses are released en masse and endanger the lives of innocent car owners.

  • 01. Introduction

    Windows 8 vs. Windows 7 Performance

    Unless you have been living under a rock, there is a good chance you have caught wind of Microsoft’s latest operating system. Those eager to see what the new OS is all about had their first chance to take a peek back in February when Microsoft released the Windows 8 Consumer Preview.

    More than a million downloads took place within the first day of the preview's release, but users were in for a shock as major changes awaited them. By far the most controversial has been the replacement of the Start menu with the new Start screen and, inherently, Microsoft's decision to do away with the Start button in desktop mode.

    For the first time since Windows 95, the Start button is no longer a centerpiece of the operating system; in fact, it's gone for good.

    On the final version of Windows 8, clicking the bottom-left corner of the screen -- where the Start button would normally be located -- launches the Metro interface (or whatever it is they are calling it now). The new tile-based interface is radically different from anything used on a Windows desktop and resembles what we've already seen working on the latest iterations of Windows Phone.
    However, many users seem to be struggling to get their head around it. Personally, in spite of using Windows 8 for several months, I'm still undecided if I like the new interface or not. It certainly takes some time getting used to and for that reason I'm not jumping to conclusions just yet.

    My opinion aside, there are countless users that have already shunned the new interface and many of them made their thoughts heard in our recent editorial "Windows 8: Why the Start Menu's Absence is Irrelevant". Yet, while everyone loves to try and remind Microsoft about how much of a flop some previous operating systems such as ME and Vista were, and that Windows 8 will be no better, we believe the new operating system still has a lot to offer.

    Microsoft's PR machine has been hard at work over the past few months, trying to explain the numerous improvements Windows 8 has received on the backend. The good news is that it shows.
    Coming from the two previews and now the final release of Windows 8, the OS seems smoother than Windows 7. It has been well documented that Windows 8 starts up and shuts down faster, so that wasn’t much of a surprise. Maybe it's the inevitable bloat of an OS installation that is a couple of years old (in the case of Windows 7), but there's a sense, much like when you move from a hard drive to an SSD, that things just appear slightly quicker. This was surprising, as I had not expected to notice much of a difference in general usage.

    Of course, this is merely an informal observation and we are here to back up those impressions with hard numbers (read: lots of benchmarks in the coming pages).

    Back when Vista first arrived I remember comparing how it performed to XP and being extremely disappointed with the results. Vista was generally rough around the edges and that included drivers, so gaming and productivity applications were more often than not slower in the new OS.
    For comparing Windows 7 and Windows 8 we will measure and test the performance of various aspects of the operating system including: boot up and shutdown times, file copying, encoding, browsing, gaming and some synthetic benchmarks. Without further ado...


    02. Benchmarks: Boot Up, PCMark, Browser, Encoding

    The following benchmarks were conducted using our high-end test system which features the Intel Core i7-3960X processor, 16GB of DDR3-1866 memory and a GeForce GTX 670 graphics card, all on the new Asrock X79 Extreme11 motherboard. The primary drive used was the Samsung Spinpoint F1 1TB, while the Kingston SSDNow V+ 200 256GB SSD was used for the AS SSD Benchmark and Windows Explorer tests.
    Using the Samsung Spinpoint F1 1TB HDD we saw OS boot up times reduced by 33%. Going from 27 seconds with Windows 7 to just 18 seconds with Windows 8 is obviously a significant improvement and it means SSD users will be able to load Windows 8 in a matter of a few seconds.
    A similar improvement is seen when measuring shutdown time. Windows 8 took 8 seconds versus the 12 seconds it took an identically configured Windows 7 system.
    We tested wake-up from sleep times using a standard hard disk drive. Windows 8 shows a marked improvement here as well, however we still thought 10 seconds was too long. We then tested Windows 8 using our SSD and the exact same 10 second window was repeated. With <5 second wake up from sleep times being touted by today's Windows 7 laptops, we imagine the operating system detects when you are using a laptop and that there are special power saving features on a mobile system that make a difference.
    3Dmark 11 is used primarily to measure 3D graphics performance, meaning graphics card drivers play a vital role here. Still the performance was very similar on both operating systems, though the more mature Windows 7 was slightly faster.
    Multimedia performance is said to be another of the strengths of Windows 8, and as you can see when testing with PCmark 7, it was 9% faster than its predecessor.
    Using the Mozilla Kraken benchmark we compared the performance of Windows 7 using IE9 and Windows 8 with IE10. As you can see the desktop version of the IE10 browsers on Windows 8 delivered virtually the same performance as IE9 on Windows 7. The Metro version of IE10 was 3% faster, reducing the completion time to just 3926ms.
    Update: We've added benchmarks for the latest versions of Firefox and Chrome on both operating systems. Besides beating IE to the punch on these synthetic benchmarks, the take away here is that both browsers tend to perform slightly better under Windows 8.
    Google V8 is another browser test we used. In this case it gives a score, so the larger the number the better. Again we see that the desktop version of the IE10 browser in Windows 8 is very similar to IE9 from Windows 7. Though this time the Metro version is actually much slower, lagging behind by a 21% margin.
    Chrome and Firefox take a huge lead compared to IE, and on both counts the browsers behave better running on Windows 8.
    PCmark7 showed us that Windows 8 was faster than Windows 7 in multimedia type tests and this has been confirmed by the x264 HD Benchmark 5.0 which favored Microsoft’s latest operating system by a 6% margin in the first pass test.
    Although the margin was very small when testing with HandBrake, we still found Windows 8 to be 1.5% faster than Windows 7.




    03. Benchmarks: Excel, File Copy, Gaming

    Comparing Windows 8 armed with the new Office 2013 suite we found that it was 10% faster when running our Excel MonteCarlo test against Windows 7 using Office 2010. Even when comparing apples to apples, with both operating systems running Excel 2010, Windows 8 is more efficient using the CPU cycles to its benefit on our MonteCarlo simulation.
    The AS SSD Benchmark was used to measure the performance of the Kingston SSDNow V+ 200 256GB SSD. Here we see that Windows 8 and Windows 7 delivered virtually the same sequential read and write performance.
    Despite delivering similar sequential read/write performance we found in the ISO benchmark that Windows 7 was 9% faster based on an average of three runs.
    Windows 8 features a new Explorer interface for transferring files, which provides more accurate data on transfer speeds and estimated time of completion. It also stacks multiple transfer windows together. The UI is awesome, but on the performance side of things there is little difference when transferring multiple large files together or individually. Windows 8 and Windows 7 deliver similar performance in both situations.
    When transferring thousands of smaller files we also found that Windows 7 and Windows 8 offer the same performance.
    Finishing up we looked at gaming performance using Just Cause 2, Hard Reset and Battlefield 3. Similar to the previous 3DMark test, this relies on graphics drivers more than anything else. As you can see both operating systems provide similar performance with a very slight edge to Windows 7's advantage.







    04. Faster, Slower, Better?

    It's often been the case with new Windows OS releases that it takes some time before performance is up to par or above the level of its predecessor. Possibly the most extreme example I can recall was the move from Windows XP to Windows Vista, though that was partly due to immature drivers on the all-new platform, and partly to do with the fact that Vista was a resource hog.

    Microsoft seemed to hit full stride with Windows 7, developing a fast and efficient operating system. Thankfully, Windows 8 continues to build on that pattern, as we found it to be on par with and occasionally faster than Windows 7.

    The improvements that have been made to startup and shutdown times are self-evident, and they were no doubt a major focus of the new OS's development, as they will particularly benefit laptop and tablet users. Another notable improvement was seen in multimedia performance. This was first observed when running PCMark 7 and later confirmed when we ran the x264 HD Benchmark 5.0 and our HandBrake encoding test.

    Most of the other tests saw little to no difference between the two operating systems. This was especially true for the gaming benchmarks, but it was most surprising in the IE tests, which we figured would show a big advantage for IE10; not so.

    Both AMD and Nvidia seem to be on top of their drivers for Windows 8 from day zero, as we were able to achieve the same level of performance in Windows 8 as we did in Windows 7 using the GeForce GTX 670 and the Radeon HD 6570.
     
    From a performance standpoint Windows 8 appears to offer a solid foundation from the get-go. Although there are only a few select areas where it is faster than Windows 7, we are pleased that it's able to match it everywhere else.

    Looking beyond benchmarks, Windows 8 appears more polished than Windows 7. Even if you plan to live on the desktop and aren't too fond of the Start screen, general usage is smoother and appears faster on Windows 8, which I found most noticeable on our somewhat underpowered Athlon II X4 system. If anything, it's a great start; now the Metro/Modern style will have to prove itself as a cross-platform OS that marries desktop, laptop and tablet PCs.