• “Be social” is the buzzword of recent years. No matter whether we are at home, in the gym, at work, or elsewhere, we are haunted by the need to be part of something online. We live alternative online lives, and we have dense networks of relationships that vary depending on the context (social, work, family).

    This human propensity for aggregation is now the foundation of the “social network” concept, a multi-dimensional interdependent community of actors or nodes. These actors/nodes are predominantly individuals, but can also be groups, companies, or even countries. Each relationship or linkage between a pair of nodes is typically a flow of material or non-material resources that may include social and/or emotional support, friendship, companionship, religious beliefs, time, information and interests, passions, expertise, money, business transactions, shared activity, and so on.

  • If you don't think your data is vulnerable, just search Google for “data breach,” and limit the search to news in the last month. You'll get more than 2,000 results. And while most may be redundant, there are enough unique stories to demonstrate that if your company network is accessible via the Internet, it is potentially under attack.

    In 2011, the top 10 reported data breaches netted hackers more than 170 million data records, including personally identifiable information (PII) such as names, addresses, and email addresses. More serious information including login credentials, credit card information, and medical treatment information was also exposed.

    While there is no data on the security employed on the systems from which this data was taken, the variety of companies and the volume of data compromised is significant enough to point out that no system should be considered safe.

    Even if you encrypt your data, you are only part of the way there. In its "2012 Data Protection & Breach Readiness Guide," the Online Trust Alliance (OTA) notes that data and disk encryption is just one of 12 security best-practices. But why isn't encryption by itself enough?

    Unfortunately, encryption is not enough because of the number and variety of attack vectors that are launched against your network every day. According to Verizon’s "2012 Data Breach Investigations Report," the vast majority of all breaches in 2011 were engineered through online attacks in the form of hacking, malware, or use of social engineering attacks -- an approach where human interaction, rather than software, is used as the attack vector.

    Let's look at the list of “Security Best-Practices” provided by the OTA, along with my added comments and thoughts as to the purpose behind each recommendation. Please note, I am in no way affiliated with the Online Trust Alliance, and I had no input into the report cited.

    Table 1: Security Best-Practices & Commentary

    Recommendation -- Purpose / Comments

    1. Use of Secure Socket Layer (SSL) for all data forms -- Limits network snooping. CAUTION: Because of known attacks on SSL, only TLS v1.1 and v1.2 should be used.

    2. Extended Validation of SSL certificates for all commerce and banking applications -- This is a consumer-protection recommendation. It does nothing for securing data.

    3. Data and disk encryption -- Limits data access. Disk encryption, depending on its implementation, uses either a software key or a hardware key and can encrypt the volume and/or the Master Boot Record (MBR). Data encryption, depending on implementation, can encrypt fields within a table or entire tables; the encryption can be symmetric or asymmetric, and it prevents access to the information in the tables.

    4. Multilayered firewall protection -- Limits cross-tier network access.

    5. Encryption of wireless routers -- Limits network entry points by blocking unauthorized wireless access.

    6. Default disabling of shared folders -- Limits network entry points by removing common shares and their associated, well-known passwords.

    7. Addressing the security risks of password-reset and identity-verification questions -- Limits unauthorized password resets and unintentional leaks of password information.

    8. Upgrading browsers with integrated phishing and malware protection -- Limits an attack vector.

    9. Email authentication to help detect malicious and deceptive email -- Limits an attack vector.

    10. Automatic patch management for operating systems, applications, and add-ons -- Reduces zero-day exploits and malware delivered as a software patch.

    11. Inventorying system access credentials -- Limits loss of network access.

    12. Remote wiping of mobile devices -- Limits data loss from stolen, lost, or known-compromised mobile devices.

    Source: Online Trust Alliance and Hendry Betts III
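The CAUTION attached to recommendation 1 (use only TLS 1.1/1.2, not legacy SSL) can be enforced directly in application code. As a minimal sketch using Python's standard `ssl` module (the helper function and host name are illustrative, not part of the OTA report), a client context can refuse anything older than TLS 1.2:

```python
import socket
import ssl

# Create a client context with secure defaults (certificate and
# host-name verification are enabled automatically).
context = ssl.create_default_context()

# Refuse legacy protocol versions outright; only TLS 1.2+ is accepted.
context.minimum_version = ssl.TLSVersion.TLSv1_2

def connect_tls(host: str, port: int = 443) -> str:
    """Open a TLS connection and return the negotiated protocol version."""
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.2" or "TLSv1.3"
```

A server that only supports SSLv3 or TLS 1.0 will simply fail the handshake against this context, which is exactly the behavior the recommendation calls for.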
    In the Purpose/Comments column above, I used the verb “limits” intentionally because nothing completely prevents users from responding to social engineering, phishing attacks via email or Web sites, or malicious downloads. User education is, in my opinion, the best tool to limit the impact of these types of attacks.

    Both my personal experience and the best-practices outlined in the OTA report show that there is no single silver bullet for data protection. The best-practices to protect your company's data engage the network, the data, and the users themselves. And, ultimately, I think the absolute best-practice is to expect a breach, actively monitor your networks, and educate the users.

    Article published on: The Internet Evolution Website.

  • Searching on the Internet today can be compared to dragging a net across the surface of the ocean.

    While a great deal may be caught in the net, there is still a wealth of information that is deep, and therefore, missed. The reason is simple: Most of the Web's information is buried far down on dynamically generated sites, and standard search engines never find it.

    According to Wikipedia, The Deep Web (also called the Deepnet, the Invisible Web, the Undernet or the hidden Web) is World Wide Web content that is not part of the Surface Web, which is indexable by standard search engines.

    The Deep Web is the set of information resources on the World Wide Web that are not reported by normal search engines.
    According to several studies, the principal search engines index only a small portion of the overall web content; the remaining part is unknown to the majority of web users.

    What would you think if you were told that beneath our feet lies a world larger than ours and much more crowded? Most of us would be shocked, and that is the reaction of those who first grasp the existence of the Deep Web: a network of interconnected systems that are not indexed, with a size estimated at hundreds of times larger than the current web, around 500 times.
    A fitting definition was provided by the founder of BrightPlanet, Mike Bergman, who compared searching on the Internet today to dragging a net across the surface of the ocean: a great deal may be caught in the net, but there is a wealth of information that is deep and therefore missed.

    Ordinary search engines find content on the web using software called “crawlers”. This technique is ineffective for finding the hidden resources of the Web that could be classified into the following categories:

    • Dynamic content: dynamic pages which are returned in response to a submitted query or accessed only through a form, especially if open-domain input elements (such as text fields) are used; such fields are hard to navigate without domain knowledge.
    • Unlinked content: pages which are not linked to by other pages, which may prevent Web crawling programs from accessing the content. This content is referred to as pages without backlinks (or inlinks).
    • Private Web: sites that require registration and login (password-protected resources).
    • Contextual Web: pages with content varying for different access contexts (e.g., ranges of client IP addresses or previous navigation sequence).
    • Limited access content: sites that limit access to their pages in a technical way (e.g., using the Robots Exclusion Standard, CAPTCHAs, or no-cache Pragma HTTP headers which prohibit search engines from browsing them and creating cached copies).
    • Scripted content: pages that are only accessible through links produced by JavaScript as well as content dynamically downloaded from Web servers via Flash or Ajax solutions.
    • Non-HTML/text content: textual content encoded in multimedia (image or video) files or specific file formats not handled by search engines.
    • Text content using the Gopher protocol and files hosted on FTP that are not indexed by most search engines. Engines such as Google do not index pages outside of HTTP or HTTPS.
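The "Limited access content" category above mentions the Robots Exclusion Standard. A well-behaved crawler checks a site's robots.txt before fetching anything, which is one reason such pages never reach an index. A minimal sketch using Python's standard `urllib.robotparser` (the robots.txt content and paths below are made up for illustration):

```python
from urllib.robotparser import RobotFileParser

# A sample robots.txt that hides one directory from all crawlers.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

def may_crawl(url_path: str, agent: str = "*") -> bool:
    """Return True if the Robots Exclusion Standard permits fetching."""
    return parser.can_fetch(agent, url_path)
```

Anything under /private/ is refused by a compliant crawler, so its content stays out of the surface web even though it is technically reachable.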

    A parallel web containing a much larger amount of information represents an invaluable resource for private companies, governments, and especially cybercrime. In the imagination of many people, the term Deep Web is associated with anonymity and with criminal activity that cannot be prosecuted because it is submerged in an inaccessible world.
    As we will see, this interpretation of the Deep Web is deeply wrong: we are facing a network that is certainly different from the usual web, but which in many ways repeats the same issues in a different context.

    Accessing the Deep Web
    To access the Deep Web, we will need to make use of the Tor network.

    What is Tor? How does it preserve anonymity?
    Tor is the acronym for “The Onion Router”, a system implemented to enable online anonymity. The Tor client software routes Internet traffic through a worldwide volunteer network of servers, hiding the user’s information and eluding monitoring activities.
    As often happens, the project was born in the military sector, sponsored by the US Naval Research Laboratory, and from 2004 to 2005 it was supported by the Electronic Frontier Foundation.
    Today the software is under the development and maintenance of the Tor Project. A user who navigates using Tor is difficult to trace, which protects his privacy, because the data is encrypted multiple times as it passes through the nodes, called Tor relays, of the network.

    Connecting to the Tor network

    Imagine a typical scenario where Alice desires to connect with Bob using the Tor network. Let’s see, step by step, how this is possible.

    She makes an unencrypted connection to a centralized directory server containing the addresses of the Tor nodes. After receiving the address list from the directory server, the Tor client software connects to a random node (the entry node) through an encrypted connection. The entry node makes an encrypted connection to a random second node, which in turn does the same to connect to a random third Tor node. The process goes on until it involves a node (the exit node) connected to the destination.
    Note that during Tor routing, the nodes for each connection are randomly chosen and the same node cannot be used twice in the same path.

    To ensure anonymity, the connections have a fixed duration: every ten minutes, to avoid statistical analysis that could compromise the user’s privacy, the client software changes the entry node.
    Up to now we have considered an ideal situation in which a user accesses the network only to connect to another user. To complicate matters, in a real scenario Alice’s node could in turn be used to route other established connections between other users.

    A malevolent third party would not be able to know which connections are initiated by a user and which by a node, making monitoring of the communications impossible.
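The routing described above can be sketched in code. The toy below illustrates only the two ideas from the text -- a random path that never reuses a node, and a message wrapped in one encryption layer per relay, each relay peeling exactly one layer. It is a pure-stdlib illustration; the XOR/SHA-256 "cipher" is NOT real cryptography and bears no relation to Tor's actual protocols:

```python
import hashlib
import random

def _keystream(key: bytes, length: int) -> bytes:
    """Derive a keystream from a key via SHA-256 in counter mode.
    Toy construction for illustration only -- not secure."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(key: bytes, data: bytes) -> bytes:
    """XOR stream 'encryption'; applying it twice removes the layer."""
    stream = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

def build_circuit(relays: list, length: int = 3) -> list:
    """Pick a random path; a relay is never used twice in one path."""
    return random.sample(relays, length)

def onion_wrap(message: bytes, circuit_keys: list) -> bytes:
    """Wrap the message in one layer per relay, exit node innermost."""
    for key in reversed(circuit_keys):
        message = xor_layer(key, message)
    return message

def onion_peel(message: bytes, circuit_keys: list) -> bytes:
    """Each relay peels exactly one layer with its own key."""
    for key in circuit_keys:
        message = xor_layer(key, message)
    return message
```

The entry node can remove only its own layer, so it learns nothing about the payload; only after the exit node peels the innermost layer does the original message emerge, which is the essence of onion routing.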

    After this necessary parenthesis on Tor network routing, we are ready to enter the Deep Web simply by using the Tor software, available from the official web site of the project. Tor works on all existing platforms, and many add-ons make its integration into existing applications, including web browsers, simple. Although the network has been designed to protect the user’s privacy, to be really anonymous it is suggested to also go through a VPN.

    A better way to navigate the Deep Web is to use the Tails OS distribution, which is bootable from any machine without leaving any trace on the host. The Tor Bundle comes with its own portable Firefox version, ideal for anonymous navigation thanks to appropriate control of the installed plugins; in an ordinary browser installation, common plugins could expose our identity.
    Once inside the network, where is it possible to go and what is it possible to find?

    Once inside the Deep Web, we must understand that navigation is quite different from the ordinary web: every search is more complex due to the absence of content indexing.

    A user starting out in the Deep Web should know that a common way to list the content is to use collections of Wikis and BBS-like sites whose main purpose is to aggregate links, categorizing them into more convenient groups for consultation. Another difference the user has to keep in mind is that instead of the classic extensions (e.g., .com, .net, .org), domains in the Deep Web generally end with the .onion suffix.
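Because the .onion suffix is the one reliable marker of a hidden service, tooling often needs to distinguish such links from ordinary ones. A trivial sketch using Python's standard `urllib.parse` (the example hostname below is made up):

```python
from urllib.parse import urlparse

def is_hidden_service(url: str) -> bool:
    """Return True if the URL points at a Tor hidden service.
    Hidden-service hostnames end with the special .onion suffix."""
    host = urlparse(url).hostname or ""
    return host.endswith(".onion")
```

Such addresses are not resolvable through ordinary DNS; they only make sense to a client connected to the Tor network.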

    Below is a short list of the links, published on Pastebin, that have made the Deep Web famous.

    The Cleaned Hidden Wiki should also be a good starting point for first explorations.
    Be careful: some content is labeled with commonly used tags such as CP (child porn) or PD (pedophile); stay far away from it.

    The Deep Web is considered a place where everything is possible: you can find every kind of material and service for sale, much of it illegal. The hidden web offers cybercrime great business opportunities: hacking services, malware, stolen credit cards, weapons.

    The deep Web is estimated to be about 500 times larger than the surface Web, with, on average, about three times higher quality based on our document scoring methods on a per-document basis. On an absolute basis, total deep Web quality exceeds that of the surface Web by thousands of times. Total number of deep Web sites likely exceeds 200,000 today and is growing rapidly.[39] Content on the deep Web has meaning and importance for every information seeker and market. More than 95% of deep Web information is publicly available without restriction. The deep Web also appears to be the fastest growing information component of the Web.

    We all know the potential of e-commerce on the ordinary web and its impressive growth over the last couple of years; now imagine the Deep Web market, which is more than 500 times bigger and where there are no legal limits on what can be sold. We are facing an enormous business controlled by cybercriminal organizations.
    Speaking of dark markets, we cannot avoid mentioning the Silk Road web site, an online marketplace located in the Deep Web where the majority of products derive from illegal activities. Of course it is not the only one; many other markets are run to address specific products, and believe me, many of them are terrifying.

    The figure below displays the distribution of deep Web sites by type of content.

    Figure 6. Distribution of Deep Web Sites by Content Type

    More than half of all deep Web sites feature topical databases. Topical databases plus large internal site documents and archived publications make up nearly 80% of all deep Web sites. Purchase-transaction sites — including true shopping sites with auctions and classifieds — account for another 10% or so of sites. The other eight categories collectively account for the remaining 10% or so of sites.

    Most transactions on the Deep Web accept the Bitcoin system for payments, allowing the purchase of any kind of product while preserving the anonymity of the transaction and encouraging the development of trade in every kind of illegal activity. We are faced with an autonomous system that allows the exercise of criminal activities while ensuring the anonymity of transactions and the inability to track down the criminals.

     The most important findings from the analysis of the deep Web are that there is massive and meaningful content not discoverable with conventional search technology and that there is a nearly uniform lack of awareness that this critical content even exists.

    I will provide more information regarding this topic in the near future. In the meantime, find below a summary and some key facts regarding the Deep Web:

    · Public information on the deep Web is currently 400 to 550 times larger than the commonly defined World Wide Web.

    · The deep Web contains 7,500 terabytes of information compared to 19 terabytes of information in the surface Web.

    · The deep Web contains nearly 550 billion individual documents compared to the 1 billion of the surface Web.

    · More than 200,000 deep Web sites presently exist.

    · Sixty of the largest deep-Web sites collectively contain about 750 terabytes of information — sufficient by themselves to exceed the size of the surface Web forty times.

    · The deep Web is the largest growing category of new information on the Internet.

    · Deep Web sites tend to be narrower, with deeper content, than conventional surface sites.

    · Total quality content of the deep Web is 1,000 to 2,000 times greater than that of the surface Web.

    · Deep Web content is highly relevant to every information need, market, and domain.

    · More than half of the deep Web content resides in topic-specific databases.

    · A full ninety-five per cent of the deep Web is publicly accessible information — not subject to fees or subscriptions.

  • Anybody looking to strike up a heated debate among technologists need only ask, "Is the cloud private?" There is an old adage that, if you have to ask, the answer is "no." However, one can expect all kinds of responses to that simple question, from the technically savvy to the academic to the emotional. The simple answer: yes and no.

    The cloud privacy debate hinges in part on the question of whether data is more private when it is stored locally or encrypted remotely. One could argue that, when an enterprise turns over its computing resources to a service provider, it will get all of the benefits of a full-time IT staff, multiple connections to the Internet, and 24/7/365 network management. The skeptic, however, will say that putting all of one's security eggs in one basket (in this case, the cloud provider) is problematic.

    If your company opts for cloud services, remember that the cloud provider's first and most important responsibility is to make a profit and stay in business. As a result, a customer on a multitenant server could find its security impacted by others collocated on that server, even while remaining entirely within the terms of the security agreement it signed with the provider.

    For example, let's say a company co-hosts its servers on the same physical box as 10 other companies at a service provider. Even though the provider might be doing everything right technically and legally, one of those other companies on the co-hosted server might be doing something illegal. If law enforcement issues a subpoena to obtain all that company's data, it could take your data, as well -- without you knowing about it in advance. In fact, some subpoenas can state specifically that the provider is not allowed to tell the target or others impacted by the investigation that their data is being reviewed. This could affect your business operations if the virtual or physical server on which your company's data is hosted is taken down by law enforcement.

    There are other privacy vulnerabilities. Let's assume the service provider is using virtualization to separate each of the companies on the server. If one of the companies on that server were to go rogue and breach the hypervisor, it could gain access to the root and, therefore, all of the virtual servers connected to that hypervisor. The attacker could gain full access to all virtual machines on the server (including yours), steal private data, and be gone before the hosting provider realizes the breach.

    An IT manager can avoid these vulnerabilities by simply hosting data on a dedicated server. But that will not take advantage of the benefits of the cloud, including the ability to move your data quickly to various servers for load balancing, disaster recovery, and more.

    This does not mean that multitenant or cloud computing is not safe. Rather, good security practices are always necessary, regardless of where data is stored. For many companies, a cloud service provider can offer a higher level of security than a company could offer itself. A risk analysis that compares housing data locally or in the cloud will answer the basic question of whether to employ a service provider. If you're better off with a provider, bring in a strong negotiator when you draft the contract to ensure that the provider keeps your interests, and not its own, up front.

    Remember that if there is a breach, regardless of whether you use a cloud provider or host data yourself, your customers will blame you for data loss. Your reputation is at stake. Since your ability to secure the cloud ends at the perimeter of your network, make sure your SLA and security agreements address technology over which you have no control. And by all means, make sure everything in the cloud is encrypted securely. There is no excuse for losing unencrypted data to a breach, locally or remotely.
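The closing advice -- encrypt everything before it reaches the cloud -- starts with deriving a strong key and protecting integrity. A minimal stdlib sketch of two of those building blocks, PBKDF2 key stretching and an HMAC integrity tag (the function names are illustrative; a real deployment would also encrypt the payload with a vetted AEAD cipher such as AES-GCM from a maintained library):

```python
import hashlib
import hmac

def derive_key(passphrase: str, salt: bytes) -> bytes:
    """Stretch a passphrase into a 32-byte key (PBKDF2-HMAC-SHA256).
    The salt must be random and stored alongside the data."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)

def seal(data: bytes, key: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so tampering in storage is detectable.
    Integrity only -- real deployments must also encrypt the payload."""
    tag = hmac.new(key, data, hashlib.sha256).digest()
    return data + tag

def open_sealed(blob: bytes, key: bytes) -> bytes:
    """Verify the trailing 32-byte tag before trusting the data."""
    data, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed")
    return data
```

With this pattern, a provider (or an attacker on a shared server) who flips even one bit of your stored blob is detected the moment you read it back.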

  • Microsoft today revealed a new look for its corporate logo, marking the first time in 25 years the company has changed its image and the fifth time overall. The new logo features the name "Microsoft" in the Segoe font — a proprietary font used in the firm’s products and marketing for several years -- alongside a multicolored Windows symbol intended to "signal the heritage but also signal the future.”

    Microsoft is preparing to launch a range of products this fall, including Windows 8 and a new Surface tablet running the OS, as well as Windows Phone 8. The software giant's new logo reflects a change in its products’ look and feel which relies heavily on a tile-based UI formerly known as Metro -- now it’s just called Windows 8 Style. It also arrives just months after introducing a new single-colored Windows 8 logo.

    The new corporate image will begin its rollout today, appearing on Microsoft.com and the company’s Twitter and Facebook accounts, followed by new TV commercials airing over the next few weeks.
    Speaking with the Seattle Times, Microsoft's general manager of brand strategy, Jeff Hansen, also commented on the company’s past logos and their influences. The first logo, used from 1975 to 1979, featured a disco-y typeface with the word Micro on one line and Soft below it, reflecting how co-founders Bill Gates and Paul Allen came up with the original company name from the words "microcomputers" and "software."

    The second logo was briefly used between 1980 and 1981; its jagged edges and strong diagonal typography reflected the computer and video-game culture of the time. The third logo, used from 1982 to 1986, introduced a stylized letter "o" with lines through it, while some tweaks in 1987 resulted in the logo most people are familiar with, featuring a slice in the first “o” and a connection between the letters "f" and “t”.

    When you see this headline, the first question that pops into your mind is: could a car get a computer virus? Well, the answer to that question is a capital YES!

    In the past, car viruses were rare because one of the only ways to infect a vehicle was through a mechanic, via the computer or software he used to diagnose problems with the car.

    More than 100 Texas drivers could have been excused for thinking that they had really horrendous luck or -- at least for the more superstitious among them -- that their vehicles were possessed by an evil spirit. That's because in 2010, more than 100 customers of a dealership called Texas Auto Center found their efforts to start their cars fruitless, and even worse, their car alarms blared ceaselessly, stopped only when the batteries were removed from the vehicles [source: Shaer].
    What seemed to some to be a rash of coincidence and mechanical failure turned out to be the work of a disgruntled employee-turned-hacker. Omar Ramos-Lopez, who had been laid off by the Texas Auto Center, decided to exact some revenge on his former Austin, Texas employer by hacking into the company's Web-based vehicle immobilization system, typically used to disable the cars of folks who had stopped making mandatory payments [source: Shaer]. Besides creating plenty of mayhem and generating a flood of angry customer complaints, Ramos-Lopez, who was eventually arrested, highlighted some of the vulnerabilities of our increasingly computer-dependent vehicles to a skilled and motivated hacker.
    Although Ramos-Lopez's attack generated a lot of attention, his hacking was fairly tame compared to the possibilities exposed by analysts at a number of different universities. Indeed, in 2010, researchers from the University of Washington and the University of California at San Diego proved that they could hack into the computer systems that control vehicles and remotely have power over everything from the brakes to the heat to the radio [source: Clayton]. Researchers from Rutgers University and the University of South Carolina also demonstrated the possibility of hijacking the wireless signals sent out by a car's tire pressure monitoring system, enabling hackers to monitor the movements of a vehicle.
    Taken together, these events show that cars are increasingly vulnerable to the sort of viruses (also known as malware) introduced by hackers that routinely bedevil, frustrate and harm PC users everywhere. Obviously, this has real implications for drivers, although the researchers themselves point out that hackers have not yet victimized many people. But the ramifications are clear.
    "If your car is infected, then anything that the infected computer is responsible for is infected. So, if the computer controls the windows and locks, then the virus or malicious code can control the windows and locks," says Damon Petraglia, who is director of forensic and information security services at Chartstone Consulting and has trained law enforcement officers in computer forensics. "Same goes for steering and braking."

    As high-technology continues to creep into horseless carriages everywhere, there's one thing we can all count on: abuse of that technology. According to Reuters, Intel's "top hackers" are on the case though, poring over the software which powers the fanciest of automobile technology in hopes of discovering (and dashing) various bugs and exploits.
    Except under the most specific of scenarios, the damaging results from an attack against an unsuspecting user's personal computer are often limited. Hackers may be able to cripple a computer, invade a user's privacy or even steal someone's identity. Causing personal injury or death, though, is typically out of the question. However, with an increasing amount of technology and software proliferating in modern vehicles, this could all change.
    "You can definitely kill people," asserts John Bumgarner, CTO of a non-profit which calls itself the U.S. Cyber Consequences Unit.
    As outlined in the following publication, Experimental Security Analysis of a Modern Automobile (pdf), researchers have already shown that a clever virus is capable of releasing or engaging brakes on a whim, even at high speeds. Such harrowing maneuvers could potentially extinguish the lives of both its occupants and others involved in the resulting accident. On certain vehicles, researchers were also able to lock and unlock doors, start and disable the engine and toggle the headlights off and on.
    Ford spokesman Alan Hall assures us, "Ford is taking the threat very seriously and investing in security solutions that are built into the product from the outset". Ford has been an industry leader in adopting advanced automotive technologies.
    Thus far, there have been no reported incidents of injury or death caused by automobile hacking. That's according to SAE International, a major standards committee for automotive and aerospace industries.
    When asked by Reuters whether or not there had been any such reports, most manufacturers declined to comment. However, McAfee executive Bruce Snell claims that automakers are still very concerned about it. Snell admits, "I don't think people need to panic now. But the future is really scary." McAfee, which is now owned by Intel, is the division of Intel investigating automobile cyber security.

    We can only hope that a solution arrives before such viruses are released en masse, endangering the lives of innocent car owners.

  • 01. Introduction

    Windows 8 vs. Windows 7 Performance

    Unless you have been living under a rock, there is a good chance you have caught wind of Microsoft’s latest operating system. Those eager to see what the new OS is all about had their first chance to take a peek back in February when Microsoft released the Windows 8 Consumer Preview.

    More than a million downloads took place within the first day of the preview's release, but users were in for a shock as major changes awaited them. By far the most controversial has been the replacement of the Start menu with the new Start screen and, inherently, Microsoft's decision to do away with the Start button in desktop mode.

    For the first time since Windows 95, the Start button is no longer a centerpiece of the operating system; in fact, it's gone for good.

    On the final version of Windows 8, clicking the bottom-left corner of the screen -- where the Start button would normally be located -- launches the Metro interface (or whatever it is they are calling it now). The new tile-based interface is radically different from anything used on a Windows desktop and resembles what we've seen working successfully on the latest iterations of Windows Phone.
    However, many users seem to be struggling to get their heads around it. Personally, in spite of using Windows 8 for several months, I'm still undecided as to whether I like the new interface. It certainly takes some getting used to, and for that reason I'm not jumping to conclusions just yet.

    My opinion aside, there are countless users that have already shunned the new interface and many of them made their thoughts heard in our recent editorial "Windows 8: Why the Start Menu's Absence is Irrelevant". Yet, while everyone loves to try and remind Microsoft about how much of a flop some previous operating systems such as ME and Vista were, and that Windows 8 will be no better, we believe the new operating system still has a lot to offer.

    Microsoft's PR machine has been hard at work over the past few months, trying to explain the numerous improvements Windows 8 has received on the backend. The good news is that it shows.
    Coming from the two previews and now the final release of Windows 8, the OS seems smoother than Windows 7. It has been well documented that Windows 8 starts up and shuts down faster, so that wasn’t much of a surprise. Maybe it's the inevitable bloating of an OS installation that is a couple of years old (in the case of Windows 7), but there's a sense, much like when you move from a hard drive to an SSD, that things just appear slightly quicker. This was surprising, as I had not expected to notice much of a difference in general usage.

    Of course, this is merely an informal observation and we are here to back up those impressions with hard numbers (read: lots of benchmarks in the coming pages).

    Back when Vista first arrived I remember comparing how it performed to XP and being extremely disappointed with the results. Vista was generally rough around the edges and that included drivers, so gaming and productivity applications were more often than not slower in the new OS.
    For comparing Windows 7 and Windows 8 we will measure and test the performance of various aspects of the operating system including: boot up and shutdown times, file copying, encoding, browsing, gaming and some synthetic benchmarks. Without further ado...

    02. Benchmarks: Boot Up, PCMark, Browser, Encoding

    The following benchmarks were conducted using our high-end test system which features the Intel Core i7-3960X processor, 16GB of DDR3-1866 memory and a GeForce GTX 670 graphics card, all on the new Asrock X79 Extreme11 motherboard. The primary drive used was the Samsung Spinpoint F1 1TB, while the Kingston SSDNow V+ 200 256GB SSD was used for the AS SSD Benchmark and Windows Explorer tests.
    Using the Samsung Spinpoint F1 1TB HDD we saw OS boot up times reduced by 33%. Going from 27 seconds with Windows 7 to just 18 seconds with Windows 8 is obviously a significant improvement, and it means SSD users will be able to load Windows 8 in a matter of seconds.
    A similar improvement is seen when measuring shutdown time. Windows 8 took 8 seconds versus the 12 seconds it took an identically configured Windows 7 system.
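    For reference, the percentage reductions quoted above can be reproduced with a few lines of Python. This is only an illustrative sketch -- our measurements were averaged over multiple runs, and only the 27/18-second boot and 12/8-second shutdown averages come from our results.

```python
# Illustrative sketch: computing the percentage improvement between two
# averaged timings, as reported in the text.

def percent_improvement(old: float, new: float) -> float:
    """Reduction relative to the old value, as a percentage."""
    return (old - new) / old * 100

win7_boot = 27.0  # average boot time in seconds (Windows 7, HDD)
win8_boot = 18.0  # average boot time in seconds (Windows 8, HDD)

print(f"Boot-up improvement: {percent_improvement(win7_boot, win8_boot):.0f}%")  # 33%
print(f"Shutdown improvement: {percent_improvement(12.0, 8.0):.0f}%")            # 33%
```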
    We tested wake-up from sleep times using a standard hard disk drive. Windows 8 shows a marked improvement here as well, though we still thought 10 seconds was too long. We then tested Windows 8 using our SSD and recorded the exact same 10-second wake time. With sub-5-second wake-from-sleep times being touted by today's Windows 7 laptops, we imagine the operating system detects when you are running on a laptop and applies special power-saving features on mobile systems that make the difference.
    3DMark 11 is used primarily to measure 3D graphics performance, meaning graphics card drivers play a vital role here. Still, performance was very similar on both operating systems, though the more mature Windows 7 was slightly faster.
    Multimedia performance is said to be another of Windows 8's strengths, and as you can see when testing with PCMark 7, it was 9% faster than its predecessor.
    Using the Mozilla Kraken benchmark we compared the performance of Windows 7 using IE9 and Windows 8 using IE10. As you can see, the desktop version of the IE10 browser on Windows 8 delivered virtually the same performance as IE9 on Windows 7. The Metro version of IE10 was 3% faster, reducing the completion time to just 3926ms.
    Update: We've added benchmarks for the latest versions of Firefox and Chrome on both operating systems. Besides beating IE on these synthetic benchmarks, the takeaway here is that both browsers tend to perform slightly better under Windows 8.
    Google V8 is another browser test we used. In this case it gives a score, so the larger the number the better. Again we see that the desktop version of the IE10 browser in Windows 8 is very similar to IE9 from Windows 7. Though this time the Metro version is actually much slower, lagging behind by a 21% margin.
    Chrome and Firefox take a huge lead compared to IE, and on both counts the browsers behave better running on Windows 8.
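    A note on methodology: Kraken reports a completion time (lower is better) while V8 reports a score (higher is better), so comparing the two means normalizing each result against a baseline. The Python sketch below shows the idea; the 3926ms Kraken time is from our results, while the baseline and V8 figures are hypothetical round numbers chosen only to mirror the 3% and 21% margins discussed above.

```python
# Normalizing benchmark results to a baseline so that time-based
# (lower is better) and score-based (higher is better) tests can be
# compared on equal footing. Baseline values here are hypothetical.

def relative_perf(result: float, baseline: float, lower_is_better: bool) -> float:
    """Performance relative to the baseline: > 1.0 means faster."""
    return baseline / result if lower_is_better else result / baseline

# Kraken reports completion time in ms (lower is better).
kraken = relative_perf(3926.0, 4050.0, lower_is_better=True)
# V8 reports a score (higher is better).
v8 = relative_perf(7900.0, 10000.0, lower_is_better=False)

print(f"Kraken, Metro IE10 vs. IE9: {kraken:.2f}x")  # ~1.03x, i.e. ~3% faster
print(f"V8, Metro IE10 vs. IE9: {v8:.2f}x")          # 0.79x, i.e. 21% slower
```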
    PCMark 7 showed us that Windows 8 was faster than Windows 7 in multimedia-type tests, and this was confirmed by the x264 HD Benchmark 5.0, which favored Microsoft's latest operating system by a 6% margin in the first-pass test.
    Although the margin was very small when testing with HandBrake, we still found Windows 8 to be 1.5% faster than Windows 7.

    03. Benchmarks: Excel, File Copy, Gaming

    Running our Excel Monte Carlo test, Windows 8 armed with the new Office 2013 suite was 10% faster than Windows 7 using Office 2010. Even when comparing apples to apples, with both operating systems running Excel 2010, Windows 8 made more efficient use of CPU cycles in our Monte Carlo simulation.
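    Our Excel MonteCarlo workbook isn't publicly available, but the flavor of the workload is easy to sketch: a Monte Carlo test hammers the CPU with millions of random samples. The hypothetical Python example below estimates pi the same way, purely to illustrate the kind of compute-bound loop such a benchmark exercises.

```python
import random

# Minimal Monte Carlo sketch: estimate pi by sampling random points in
# the unit square and counting how many land inside the quarter circle.
# This stands in for the (non-public) Excel workbook used in our test.

def estimate_pi(samples: int, seed: int = 42) -> float:
    rng = random.Random(seed)  # fixed seed for a reproducible run
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:  # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi(1_000_000))  # converges toward 3.14159...
```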
    The AS SSD Benchmark was used to measure the performance of the Kingston SSDNow V+ 200 256GB SSD. Here we see that Windows 8 and Windows 7 delivered virtually the same sequential read and write performance.
    Despite delivering similar sequential read/write performance we found in the ISO benchmark that Windows 7 was 9% faster based on an average of three runs.
    Windows 8 features a new Explorer interface for transferring files, which provides more accurate data on transfer speeds and estimated time of completion. It also stacks multiple transfer windows together. The UI is awesome, but on the performance side of things there is little difference when transferring multiple large files together or individually. Windows 8 and Windows 7 deliver similar performance in both situations.
    When transferring thousands of smaller files we also found that Windows 7 and Windows 8 offer the same performance.
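    For readers who want to reproduce the general approach, a file-copy test boils down to timing a copy and dividing the bytes moved by the elapsed time. The Python sketch below scales the payload down to 16 MB so it runs anywhere; our actual tests used multi-gigabyte files and batches of thousands of small files.

```python
import os
import shutil
import tempfile
import time

# Sketch of a simple file-copy benchmark: time the copy, then derive a
# throughput figure from the payload size.

def time_copy(src: str, dst: str) -> float:
    """Copy src to dst and return the elapsed time in seconds."""
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "payload.bin")
    with open(src, "wb") as f:
        f.write(os.urandom(16 * 1024 * 1024))  # 16 MB of random data
    elapsed = time_copy(src, os.path.join(tmp, "copy.bin"))
    print(f"Copied 16 MB in {elapsed:.3f} s ({16 / elapsed:.0f} MB/s)")
```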
    Finishing up, we looked at gaming performance using Just Cause 2, Hard Reset and Battlefield 3. As with the earlier 3DMark test, this relies on graphics drivers more than anything else. As you can see, both operating systems provide similar performance, with a very slight edge in Windows 7's favor.

    04. Faster, Slower, Better?

    It's often been the case with new Windows OS releases that it takes some time before performance is up to par or above the level of its predecessor. Possibly the most extreme example I can recall was the move from Windows XP to Windows Vista, though that was partly due to immature drivers on the all-new platform, and partly to do with the fact that Vista was a resource hog.

    Microsoft seemed to hit full stride with Windows 7, developing a fast and efficient operating system. Thankfully, Windows 8 seems to continue that pattern, as we found it to be on par with, and occasionally faster than, Windows 7.

    The improvements made to startup and shutdown times are self-evident, and were no doubt a major focus of the new OS's development, as they will particularly benefit laptop and tablet users. Another notable improvement was seen in multimedia performance. This was first observed when running PCMark 7 and later confirmed when we ran the x264 HD Benchmark 5.0 and our HandBrake encoding test.

    Most of the other tests saw little to no difference between the two operating systems. This was especially true for the gaming benchmarks, and most surprising in the IE tests, which we figured would show a big advantage for IE10, but did not.

    Both AMD and Nvidia seem to be on top of their drivers for Windows 8 from day zero, as we were able to achieve the same level of performance in Windows 8 as we did in Windows 7 using the GeForce GTX 670 and the Radeon HD 6570.
    From a performance standpoint Windows 8 appears to offer a solid foundation from the get-go. Although there are only a few select areas where it is faster than Windows 7, we are pleased that it's able to match it everywhere else.

    Looking beyond benchmarks, Windows 8 appears more polished than Windows 7. Even if you plan to live on the desktop and aren't too fond of the Start screen, general usage is smoother and feels faster on Windows 8, which I found most noticeable on our somewhat underpowered Athlon II X4 system. If anything, it's a great start; now the Metro/Modern style will have to prove itself as a cross-platform OS that marries desktop, laptop and tablet PCs.
  • A comprehensive article that touches on cyber-crime laws, the limits of efforts to overcome cyber-crime, and the opportunity inherent in the collective security of the human race.

    With the advent of the computer age, legislatures have been struggling to redefine the law to fit crimes perpetrated by computer criminals. Computer crime is among the newest and most constantly evolving areas of the law in many jurisdictions. The rise of technology and online communication has not only produced a dramatic increase in the incidence of criminal activity, it has also resulted in the emergence of what appear to be new varieties of criminal activity. Both the increase in incidence and the emergence of new varieties of crime pose challenges for legal systems as well as for law enforcement.

    The news said another person had their identity stolen. It happened again. You might even know someone it has happened to. We often hear the statistics - and they are surprisingly high. Enforcement is taking place, but we have to wonder whether computer crime laws are really having any effect against cyber crime.

    Defining Cyber Crime

    Computer crime refers to any crime that involves a computer and a network. The computer may have been used in the commission of a crime, or it may be the target. Net-crime refers to criminal exploitation of the Internet. Cyber-crimes are defined as: "Offenses that are committed against individuals or groups of individuals with a criminal motive to intentionally harm the reputation of the victim or cause physical or mental harm to the victim directly or indirectly, using modern telecommunication networks such as Internet (Chat rooms, emails, notice boards and groups) and mobile phones (SMS/MMS)".

    Hacking has a rather simple definition: the unauthorized use of a computer, especially when it involves attempting to circumvent the security measures of that computer or of a network.

    Beyond this, there are two basic types of hacking. Some hack only because they want to see if they can - it is a challenge to them. For others, however, it becomes an attack, and they use their unauthorized access for destructive purposes. Hacking occurs at all levels and at all times - by someone, for some reason. It may be a teen seeking peer recognition, or a thief, a corporate spy, or one nation acting against another.

    Effectiveness of Computer Hacking Laws

    Like any other law, its effectiveness must be judged by its deterrent effect. While there will always be those who want to see if they can do it and get away with it (as with any crime), there are many more who may refrain if they are aware of its unlawfulness - and of possible imprisonment.

    In the early 1990s, when hacker efforts stopped AT&T communications altogether, the U.S. government launched its program to go after the hackers. This was stepped up further when government reports (by the GAO) indicated that there had been more than 250,000 attempts to hack into Defense Department computers. First there were the laws - now came the bite behind them. The effects of computer hacking brought about focused efforts to catch hackers and punish them by law.

    More recently, the U.S. Justice Department revealed that the National Infrastructure Protection Center had been created to protect our major communications, transportation and technology infrastructure from attacks by hackers. Reining in hackers has become the focus of many governmental groups seeking to stop this maliciousness against individuals, organizations, and nations.

    One of the most famous computer criminals was Kevin Mitnick, who was tracked by computer and caught in 1995. He served a prison sentence of about five years. Others have likewise been caught. Another case is that of Vasily Gorshkov of Russia, who was 26 years old when convicted in 2001. He was found guilty of conspiracy and computer crime.

    Other individuals have also been found guilty and sentenced - and many others remain on trial. If you pay much attention to the news, then you know that every now and then you will hear of another hacker who has been caught, or a group of hackers arrested for their criminal activities. The interesting thing is that it is often people who learned hacking techniques themselves who are now using them to catch other criminal hackers.

    Another criminal hacker, who called himself Tasmania, made big news when he fled Spain for Argentina on various charges of breaking into online bank accounts and banks. There he went into operation again. He was quickly tracked to Argentina, and the governments of Spain and Argentina went after him, first with surveillance. Before long he was arrested, along with 15 other men, and was extradited back to Spain (in 2006), where he could face up to 40 years in prison.

    The simple truth is, these criminal hackers/cyber attackers get smarter every day and do everything possible to cover their tracks, making it difficult to locate them. We can't help but wonder whether these computer crime laws have any impact on the rate of computer crimes being committed day after day, whether the existing laws are adequate to combat cyber crime, and consequently whether amendments need to be put in place.

    Today, criminal organizations are very active in the development and diffusion of malware that can be used to execute complex fraud with minimal risks to the perpetrators. Criminal gangs, traditionally active in areas such as human or drug trafficking, have discovered that cyber-crime is a lucrative business with much lower risks of being legally pursued or put in prison. Unethical programmers are profitably servicing that growing market. Because today’s ICT ecosystem was not built for security, it is easy for attackers to take over third party computers, and extremely difficult to track attacks back to their source. Attacks can be mounted from any country and hop through an arbitrary number of compromised computers in different countries before the attack reaches its target a few milliseconds later. This complicates attribution and international prosecution.


    1.  THE COMPUTER MISUSE ACT OF 1990: A law in the UK that makes illegal certain activities, such as hacking into other people’s systems, misusing software, or helping a person to gain access to protected files of someone else's computer.

    Sections 1-3 of the Act introduced three criminal offences:

    a) Unauthorised access to computer material, punishable by 6 months' imprisonment or a fine "not exceeding level 5 on the standard scale" (currently £5000);

    b) unauthorised access with intent to commit or facilitate commission of further offences, punishable by 6 months/maximum fine on summary conviction or 5 years/fine on indictment;

    c) unauthorised modification of computer material, subject to the same sentences as section 2 offences.

    2. COMPUTER FRAUD AND ABUSE ACT: A law passed by the United States Congress in 1986, intended to reduce cracking of computer systems and to address federal computer-related offenses. The Act (codified as 18 U.S.C. § 1030) governs cases with a compelling federal interest, where computers of the federal government or certain financial institutions are involved, where the crime itself is interstate in nature, or where computers are used in interstate and foreign commerce.
    It was amended in 1989, 1994, 1996, in 2001 by the USA PATRIOT Act, in 2002, and in 2008 by the Identity Theft Enforcement and Restitution Act. Subsection (b) of the Act punishes not only anyone who commits or attempts to commit an offense under the Act, but also those who conspire to do so.

    3. ELECTRONIC COMMUNICATIONS PRIVACY ACT: Passed in 1986, the Electronic Communications Privacy Act (ECPA) was an amendment to the federal wiretap law that made it illegal to intercept stored or transmitted electronic communication without authorization. ECPA set out the provisions for access, use, disclosure, interception and privacy protection of electronic communications, defined as "any transfer of signs, signals, writing, images, sounds, data, or intelligence of any nature transmitted in whole or in part by a wire, radio, electromagnetic, photo electronic or photo optical system that affects interstate or foreign commerce." The Act prohibits illegal access to and certain disclosures of communication contents. In addition, ECPA prevents government entities from requiring disclosure of electronic communications by a provider such as an ISP without first going through a proper legal procedure.

    4. CYBER SECURITY ENHANCEMENT ACT: The Cyber Security Enhancement Act (CSEA) was passed together with the Homeland Security Act in 2002. It granted sweeping powers to law enforcement organizations and increased the penalties set out in the Computer Fraud and Abuse Act.

    The Act also authorizes harsher sentences for individuals who knowingly or recklessly commit a computer crime that results in death or serious bodily injury.
    The sentences can range from 20 years to life. In addition, CSEA increases penalties for first-time interceptors of cellular phone traffic, thus removing a safety measure previously enjoyed by radio enthusiasts.

    5. OTHER LAWS USED TO PROSECUTE COMPUTER CRIMES

    In addition to laws specifically tailored to deal with computer crimes, traditional laws can also be used to prosecute crimes involving computers. For example, the Economic Espionage Act (EEA) was passed in 1996 to put a stop to trade secret misappropriation. The EEA makes it a crime to knowingly commit an offense that benefits a foreign government or a foreign agent. The Act also contains provisions that make it a crime to knowingly steal trade secrets, or attempt to do so, with the intent of benefiting someone other than the owner of the trade secrets. The EEA defines stealing of trade secrets as copying, duplicating, sketching, drawing, photographing, downloading, uploading, altering, destroying, photocopying, replicating, transmitting, delivering, sending, mailing, communicating, or conveying trade secrets without authorization.

    While we can't survey all computer crime laws here, different countries have different laws laid down to fight cybercrime and to prosecute the guilty.


    We’ve discovered that internationally, both Governmental and non-state actors engage in cybercrimes, including espionage, financial theft, and other cross-border crimes. Activity crossing international borders and involving the interests of at least one nation-state is sometimes referred to as cyber warfare. The international legal system is attempting to hold actors accountable for their actions through the International Criminal Court.

    And this leads us to discussing invasive monitoring by governments. Wikileaks claims that mass interception of entire populations is not only a reality; it is a secret new industry spanning 25 countries. Wikileaks has published 287 files that describe commercial malware products from 160 companies (http://wikileaks.org/the-spyfiles.html). These files include confidential brochures and slide presentations these companies use to market intrusive surveillance tools to governments and law enforcement agencies. This industry is, in practice, unregulated. Intelligence agencies, military forces and police authorities are able to silently, and en masse, secretly intercept calls and take over computers without the help or knowledge of the telecommunication providers. Users’ physical location can be tracked if they are carrying a mobile phone, even if it is only on standby (think RFID).

    To get a glimpse of the potential market size, the U.S. government is required by law to reveal the total amount of money spent spying on other nations, terrorists and other groups. In 2010, the United States spent $80 billion on spying activities. According to the Office of the Director of National Intelligence, $53.1 billion of that was spent on non-military intelligence programmes. Approximately 100,000 people work on national intelligence. These figures do not include DARPA's "Plan X", which seeks to identify and track the vulnerabilities in tens of billions of computers connected to the Internet, so they can be exploited.

    It is increasingly common for governments to use monitoring tools, viruses and Trojans to infect computers and attack civilians, dissidents and political opponents. The purpose is to track victims' activity on the web, gather information about what they do, and identify their collaborators. In some cases, this can lead to those targeted being neutralized and even ruthlessly suppressed.

    According to F-Secure's "News from the Lab" blog, during the Syrian repression the government discovered that dissidents were using programmes like Skype to communicate. After the arrest of a few dissidents, the government used their Skype accounts to spread a malware programme called "Xtreme RAT", hidden in a file called "MACAddressChanger.exe", to other activists, who downloaded and executed the malware. The dissidents trusted the MACAddressChanger programme because other files with that name had successfully been used in the past to elude the government's monitoring system. Xtreme RAT falls into the "Remote Access Tool" category; the full version can easily be bought online for €100. The IP address of the command and control server used in those attacks belonged to the Syrian Arab Republic -- STE (Syrian Telecommunications Establishment).

    On the Trend Micro "Malware Blog", experts found that the Syrian government was also using the DarkComet malware to infect computers of the opposition movement. The malware steals documents from victims and also appears to have been spread through Skype chat. Once executed, it tries to contact its command and control (C&C) server to transfer the stolen information and receive further instructions. In this case, the C&C server was located in Syria, in a range of IP addresses under the control of the Syrian government.

    What the above partially illustrates is the very real conflict of interest in organizations and governments responsible for securing our digital world.

    African countries have been criticized for dealing inadequately with cybercrime, as their law enforcement agencies are under-equipped in terms of personnel, intelligence and infrastructure, and the private sector is also lagging in curbing cybercrime. African countries are preoccupied with pressing issues such as poverty, the AIDS crisis, the fuel crisis, political and ethnic instability, and traditional crimes such as murder, rape and theft, with the result that the fight against cybercrime lags behind. It is submitted that international mutual legal and technical assistance should be rendered to African countries by corporate and individual entities to effectively combat cybercrime in Africa.


    While there is no silver bullet for dealing with cyber crime, it doesn’t mean that we are completely helpless against it. The legal system is becoming more tech savvy and many law enforcement departments now have cyber crime units created specifically to deal with computer related crimes, and of course we now have laws that are specifically designed for computer related crime. While the existing laws are not perfect, and no law is, they are nonetheless a step in the right direction toward making the Internet a safer place for business, research and just casual use. As our reliance on computers and the Internet continues to grow, the importance of the laws that protect us from the cyber-criminals will continue to grow as well.

    Efforts at combating cyber-crimes will continue to produce futile results as long as governments and the organized public sector (OPS) are insincere in their drive to protect the sanctity of the internet.
    Whatever efforts we make, we shouldn't ignore the fact that an enlightened citizenry is the key to the safety of the internet; but the battle for sovereign supremacy will continue to undermine our collective safety online.
    It behooves every one of us on the globe to look inward and recognize that our collective safety is greater than the greed and ferocity of hegemonists in the private sector and supremacists in government.

