• Search technologies have been using big data processing and machine learning to improve search results for a while now, but only recently have we come to understand that these same techniques can be used for personalization. To search huge volumes of data efficiently, search engines use a range of smart techniques and algorithms to deliver fairly accurate, interest-based content.

    Google, for example, is known to keep data on users' past searches, device usage, location, demographics and more in order to predict and provide quick, easy access to interest-based results. The availability of this vast data is why we can search for 'McDonald's' and get the five locations closest to us, instead of ones in faraway China.
    In offering personalized content and experiences geared toward users' individual interests, machine learning algorithms are injected into almost every platform to predict a user's intentions based on what the platform has learned from that user's behavioral and historical data. These recommender systems present tailored information to the user while reducing news diversity, which leads to partial information blindness (i.e., filter bubbles).
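    To make that mechanic concrete, here is a minimal sketch of a history-based recommender; the data, topics and scoring are all hypothetical, and real systems are vastly more sophisticated:

```python
from collections import Counter

# A minimal sketch (all names and data are hypothetical): a recommender
# that ranks articles purely by how often the user clicked each topic
# before. The more history it accumulates, the narrower the results get.

click_history = ["sports", "sports", "politics", "sports"]  # past behaviour

candidate_articles = [
    {"title": "Transfer window roundup", "topic": "sports"},
    {"title": "Election results analysis", "topic": "politics"},
    {"title": "New cancer treatment trial", "topic": "science"},
]

def personalised_ranking(articles, history):
    """Score each article by the user's historical interest in its topic."""
    interest = Counter(history)
    return sorted(articles, key=lambda a: interest[a["topic"]], reverse=True)

for article in personalised_ranking(candidate_articles, click_history):
    print(article["title"])
# The science article always ranks last, however important it may be --
# relevance-only ranking is exactly what produces a filter bubble.
```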

    Wikipedia describes the filter bubble as a state of intellectual isolation that can result from personalized searches, when a website algorithm selectively guesses what information a user would like to see based on the information the search engine has gathered about that user. In other words, a user's filter bubble is his personal, unique universe of online information, and what's inside it depends on who the user is, where he lives, what he does and what sites he visits.

    These filter bubbles limit users from harnessing the full potential of the internet, contrary to its original promise of unrestricted access to a vast amount of information. That idea prevailed over the last 10-15 years, but it is fast declining in today's world.

    Today's Internet giants — Google, Facebook, Yahoo and Microsoft — and other technology companies see the remarkable rise of available information as an opportunity. If they can provide services that sift through the data and supply us with the most personally relevant and appealing results, they'll get the most users and the most ad views. As a result, they're racing to offer personalized filters that show us the internet they think we want to see. These filters, in effect, control and limit the information that reaches our screens.

    By now, everyone is familiar with sponsored ads that follow us around online based on our recent clicks on commercial websites. This is a result of the growing, nearly invisible storage of our personal information, with and sometimes without our consent. For instance, two users who each search Google for "Nigeria" may get significantly different results based on their past clicks; aggregators like Yahoo News and Google News adjust their home pages for each individual visitor, and this technology is making inroads on the websites of newspapers like The Washington Post and The New York Times.

    As Facebook's CEO Mark Zuckerberg aptly put it while emphasizing the importance of the News Feed and the need to customize it from user to user: "A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa" – one of the probable baselines for Facebook's tailored news feed. At Facebook, "relevance" is virtually the sole criterion that determines what users see. Focusing on the most personally relevant news — the squirrel — is a great business strategy, but it leaves us staring at our front yard instead of reading about suffering, genocide, and revolution.

    In a 2010 interview with The Wall Street Journal, then-Google CEO Eric Schmidt said, "It will be very hard for people to watch or consume something that has not in some sense been tailored for them", referencing the power of individual targeting.

    All of this is fairly harmless when information about consumer products is filtered into and out of our personal universe. But when personalization affects not just what we buy but how we think, different issues arise. We get trapped in a filter bubble and are not exposed to information that could challenge or broaden our worldview. Globalization and democracy depend on citizens' ability to engage with multiple viewpoints; the internet limits such engagement when it offers up only information that reflects our already established point of view. While it's sometimes convenient to see only what we want to see, it's critical at other times that we see things we don't.

    What we are witnessing currently is the passing of the torch from human gatekeepers to algorithms. Algorithms do not possess the ethics of human editors, so if they are going to curate the world for us by deciding what we see and what we don't, then we need to make sure they are not keyed to relevance alone, but that they also show us the full picture. They need to show us not just information based on previously gathered data, or Kim K's take on an issue (because it's generally sought after), but also what's relevant to our search in faraway Timbuktu.
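    One remedy discussed in the literature (see the Quora reference below) is to re-rank results for diversity as well as relevance. Below is a minimal sketch in the spirit of maximal marginal relevance (MMR) re-ranking; the items, topics and scores are made up for illustration:

```python
# A minimal, hypothetical sketch of diversity-aware re-ranking in the
# spirit of maximal marginal relevance (MMR): each pick trades off
# relevance against similarity to what has already been selected.

def similarity(a, b):
    """Toy similarity: 1.0 if two items share a topic, else 0.0."""
    return 1.0 if a["topic"] == b["topic"] else 0.0

def diversify(items, k=3, trade_off=0.7):
    selected = []
    pool = list(items)
    while pool and len(selected) < k:
        def mmr_score(item):
            redundancy = max((similarity(item, s) for s in selected), default=0.0)
            return trade_off * item["relevance"] - (1 - trade_off) * redundancy
        best = max(pool, key=mmr_score)
        selected.append(best)
        pool.remove(best)
    return selected

results = [
    {"title": "Squirrel rescued downtown", "topic": "local", "relevance": 0.9},
    {"title": "Local bake sale this weekend", "topic": "local", "relevance": 0.8},
    {"title": "Humanitarian crisis update", "topic": "world", "relevance": 0.6},
]
for r in diversify(results):
    print(r["title"])
# The world-news item displaces the second local story: a small step
# toward "showing the full picture" rather than ranking by relevance alone.
```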

    In today's world of growing data, personalization is indispensable for efficient search and relevance. It has helped structure web content and has unequivocally brought significant benefits to users, which are of great value provided the user in question consents. Without that consent, personalization not only limits users but may also breach regulations such as the General Data Protection Regulation, which went into effect in May 2018.

    We are living in an exciting, innovative and slightly scary world. I believe the majority of the benefits of personalized search driven by machine learning and pattern matching tend to outweigh the risks, but we must be ever-diligent to ensure we don't develop a permanent privacy and perspective blind spot.




    References: 

    "Beware Online 'Filter Bubbles'" – Eli Pariser, TED Talk (2011)

    "The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think" – Eli Pariser (2011)

    "Filter Bubble" – Wikipedia

    "When the Internet Thinks It Knows You" – The New York Times (May 23, 2011)

    "Google chief warns on social networking dangers" – The Guardian (2010)

    "How can recommender systems incorporate diversity and break filter bubbles?" – Quora (2016)





  • A new ransomware campaign targeting large organisations in the US and around the world has made the attackers behind it over $640,000 in bitcoin in the space of just two weeks, and appears to be connected to Lazarus, the hacking group working out of North Korea.
    "From the exploitation phase through to the encryption process and up to the ransom demand itself, the carefully operated Ryuk campaign is targeting enterprises that are capable of paying a lot of money in order to get back on track," said security company Check Point.
    Ryuk ransomware first emerged in mid-August and, in the space of just days, infected several organisations across the US, encrypting victims' PCs, storage and data centres and demanding huge Bitcoin ransoms -- one organisation is believed to have paid 50 Bitcoin (around $320,000) after falling victim to the attack.
    The new ransomware campaign has been detailed by the researchers at Check Point who describe the attacks as highly targeted to such an extent that the perpetrators are conducting tailored campaigns involving extensive network mapping, network compromise and credential stealing in order to reach the end goal of installing Ryuk and encrypting systems.
    It sounds similar to the techniques used by those behind SamSam ransomware, which has made its authors over $6 million, although there's not thought to be a link between these two particular malicious operations.
    Researchers have yet to determine how exactly the malicious payload is delivered, but users infected with Ryuk are met with one of two ransom notes.
    One is written almost politely, claiming that the perpetrators have found a "significant hole in the security systems of your company" which has led to all files being encrypted and that a Bitcoin ransom needs to be paid to retrieve the files.
    "Remember, we are not scammers" the message concludes -- before stating how all files will be destroyed if a payment isn't received within two weeks.

    Image: One of the Ryuk ransom notes.
    A second note is blunter, simply stating that files have been encrypted and that a ransom must be paid in order to retrieve the files. In both cases, victims are given an email to contact and a bitcoin wallet address and are told that "no system is safe" from Ryuk.
    In both cases, ransoms have ranged between 15 and 35 Bitcoin (up to around $224,000), with an additional half a Bitcoin added for every day the victim doesn't give in to the demands.
    With such large ransoms being demanded, it appears that the attackers have researched their victims and have come to the conclusion that they'll be willing to pay to retrieve their data.
    "It is reasonable to assume the threat actors had some prior knowledge about their victims and their financial background. The fact that the targets are organizations and not individuals, might lead to a scenario where they have highly valuable data encrypted, which gives the perpetrators leverage to request higher amounts for its recovery.In such cases and in light of the underlying business impact, it becomes inevitable for the victims to pay the ransom.
    If victims pay up, the cryptocurrency is divided and transferred between multiple wallets as the attackers attempt to disguise where the funds came from.
    The ransomware hasn't been widely distributed, indicating that careful planning is behind attacks against specific organisations.
    But while the Ryuk campaign is new, researchers have found that the code is almost exactly the same as another form of ransomware -- Hermes. 
    Hermes ransomware first appeared late last year and has previously been connected to attacks conducted by the North Korean Lazarus hacking group, including when it was used as a diversion for a $60m cyber heist against the Far Eastern International Bank in Taiwan.
    Researchers inspecting Ryuk's encryption logic have found it resembles Hermes to such an extent that it still references Hermes within the code and that a number of rules and instructions are the same in both forms of malware, indicating identical source code.
    That's led Check Point to two possible conclusions: Ryuk is a case of North Korean hackers re-using code to conduct a new campaign, or that it is the work of another attacker which has somehow gained access to the Hermes source code.
    In either case, the specifically targeted attacks and the reconnaissance required in order to conduct them suggests that those behind Ryuk have the time and resources necessary to carry out the campaign. The current bounty of at least $640,000 suggests it's paying off and researchers warn that more attacks will come.
    "After succeeding with infecting and getting paid some $640,000, we believe that this is not the end of this campaign and that additional organizations are likely to fall victim to Ryuk," said researchers.






• After listening to Eric Schmidt (Executive Chairman at Google) speak at the TUM (Technical University of Munich) speaker series -- where he spoke mainly about Artificial Intelligence (AI): how it began, its impact on present-day life and what to expect from AI in the future -- I decided to put together this article to clarify the misconceptions between Artificial Intelligence and Machine Learning.

    Artificial Intelligence (AI) and Machine Learning (ML) are two of the hottest technology terms right now, and they often seem to be used interchangeably. They are not quite the same thing, albeit closely intertwined, and the perception that they are can lead to some confusion.

    Both terms are often heard when the topic is Big Data analytics, and they are now widespread thanks to the wonderful technologies creatively portrayed on screen (sci-fi movies, TV series, documentaries and so on) and to the machines gradually seeping into our lives: voice-powered digital assistants (Google Now, Siri and Alexa), self-driving cars, navigation systems, internet search engines and more.


    So, what then is Artificial Intelligence and what is Machine Learning?

    Artificial Intelligence is an area of computer science that emphasizes the creation of intelligent machines that can work and react like humans and carry out tasks in a way that we would consider "smart".

    Whereas:

    Machine Learning is a technology within the sphere of Artificial Intelligence, based on the idea that we should be able to give machines access to data and let them learn for themselves. We can also say it is the science of getting computers to act without being explicitly programmed.

    Machine Learning is a subset of Artificial Intelligence, i.e. all Machine Learning counts as Artificial Intelligence, but not all Artificial Intelligence counts as Machine Learning.
    The pioneering technology within Machine Learning mimics (at a very rudimentary level) the pattern-recognition abilities of the human brain by processing thousands or even millions of data points.
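    To illustrate that "learning from data points" idea, here is a minimal sketch of a nearest-neighbour classifier; the data set and labels are invented purely for illustration:

```python
# A minimal sketch of pattern recognition from data points: a tiny
# nearest-neighbour classifier that learns from labelled examples
# instead of being given explicit rules. All data here is made up.

def distance(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def predict(training_data, point, k=3):
    """Label a new point by majority vote of its k nearest neighbours."""
    neighbours = sorted(training_data, key=lambda item: distance(item[0], point))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

# (height_cm, weight_kg) -> species: a toy two-class problem
training_data = [
    ((20, 4), "cat"), ((22, 5), "cat"), ((19, 4), "cat"),
    ((60, 30), "dog"), ((55, 25), "dog"), ((65, 32), "dog"),
]

print(predict(training_data, (21, 5)))   # -> "cat"
print(predict(training_data, (58, 28)))  # -> "dog"
```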


    Now, let's talk more about AI


    The computer concept of AI has many facets. They range from neural networks, expert systems, AI languages (e.g. LISP and PROLOG) and planning systems (goal-based reasoning) to subfields such as natural language processing, rule-based systems, blackboard architectures, image processing and recognition, cybernetics (robotics), control systems (fuzzy logic), semantic nets, command systems, data/sensor fusion, Bayesian statistics, discourse production, viewpoints and focus of attention, speech understanding (e.g. Hearsay), and more.

    AI has thus become a broad field, involving many disciplines ranging from robotics to machine learning and deep learning.

    Talking of AI subsets, Artificial Intelligence can be divided into three main groups:

    1. Natural Language Processing (NLP) – the ability of machines to understand and interpret human language as it is written or spoken. The objective of NLP is to make machines communicate as effectively and intelligently as human beings in a natural language (e.g. English, German, French or Chinese).
    2. Knowledge Representation and Automated Reasoning – a field of AI dedicated to representing information about the world in a form that a computer system can use to solve complex tasks. Basically, this field allows the computer to store complex information, break it down, and then use it to answer queries and infer new facts from existing data (see the sketch just after this list).
    3. Machine Learning – the science of getting computers to act without being explicitly programmed. In the past decade, machine learning (supported by other facets of AI) has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without even knowing it.
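    To make group 2 concrete, here is a minimal, hypothetical sketch of knowledge representation and automated reasoning: facts and a rule are stored explicitly, and a simple forward-chaining loop derives new facts the system was never directly given:

```python
# A minimal, hypothetical sketch of knowledge representation and automated
# reasoning: facts are stored as tuples, and a forward-chaining loop
# repeatedly applies one rule until no new facts can be inferred.

facts = {("parent", "ada", "byron"), ("parent", "byron", "annabella")}

# Rule: if X is a parent of Y and Y is a parent of Z, X is a grandparent of Z.
def infer_grandparents(facts):
    inferred = set(facts)
    changed = True
    while changed:
        changed = False
        for (rel1, x, y) in list(inferred):
            for (rel2, y2, z) in list(inferred):
                if rel1 == rel2 == "parent" and y == y2:
                    new_fact = ("grandparent", x, z)
                    if new_fact not in inferred:
                        inferred.add(new_fact)
                        changed = True
    return inferred

for fact in sorted(infer_grandparents(facts)):
    print(fact)
# ('grandparent', 'ada', 'annabella') is derived, not stored -- the system
# answers a query it was never explicitly programmed with.
```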

    The Rise of Machine Learning:

    Two important breakthroughs led to the emergence of Machine Learning as the vehicle driving Artificial Intelligence development forward at its current speed.

    One of these was the realization – credited to Arthur Samuel in 1959 – that rather than teaching computers everything they need to know about the world and how to carry out tasks, it might be possible to teach them to learn for themselves.

    The second, more recently, was the emergence of the internet, and the huge increase in the amount of digital information being generated, stored, and made available for analysis.

    Once these innovations were in place, engineers realized that rather than teaching computers and machines how to do everything, it would be far more efficient to code them to think like human beings, and then plug them into the internet to give them access to all of the information in the world.


    A Case of Branding?

    Artificial Intelligence – and in particular, today, Machine Learning – certainly has a lot to offer. With its promise of automating mundane tasks as well as offering creative insight, industries in every sector from banking to healthcare and manufacturing are reaping the benefits. So it's important to bear in mind that AI and ML are also something else: products which are being sold – consistently, and lucratively.

    Machine Learning has certainly been seized on as an opportunity by marketers. With AI having been around for so long, it's possible that it started to be seen as somewhat "old hat" even before its potential was ever truly realized. There have been a few false starts along the road to the "AI revolution", and the term Machine Learning gives marketers something new, shiny and, importantly, firmly grounded in the here-and-now to offer.

    The fact that we will eventually develop human-like AI has often been treated as something of an inevitability by technologists. Certainly, today we are closer than ever, and we are moving toward that goal with increasing speed. Much of the exciting progress of recent years is thanks to fundamental changes in how we envisage Artificial Intelligence working, changes brought about by Machine Learning. One thing is obvious: Machine Learning is a big driving force behind the successes of Artificial Intelligence, though we still have to give recognition to the other subsets of AI, which are just as important and in some cases are the backbone of an AI application. Because AI in most cases depends heavily on ML, and the two are often used together to develop complex and intelligent applications, it is quite understandable that people confuse them as being the same when in reality they are not. I hope this piece has helped a few people understand the distinction between AI and ML. In my next publication, I intend to dig deeper into another hot buzzword: Deep Learning.

    Stay tuned…


    By: Ogunjale Moyosore


  • Some of the technology dreamt up in science fiction movies is now becoming a reality!
    The cell phones we carry in this age are far more powerful than Star Trek's communicators. Our computers are talking to us with far more acuity than HAL from that landmark sci-fi movie, 2001: A Space Odyssey. And we can Skype with our friends halfway around the world… even further than Jane Jetson!
    But what about self-driving cars? Where are we at with these new innovations in transportation?
    Image: 1960s artist's rendering of self-driving cars
    Will Hollywood's self-driving cars turn out even better than we could have imagined?
    After doing the research for this article, I have to think so. Take a look at the latest, most credible, video-shares I could find on this foreseeable innovation.
    (I should mention that there are countless self-driving car videos on the web, but most are made by companies I can't verify as credible, or are too old to be relevant; so in terms of your time spent here today, I'm confident I found the best information for us.)
    The following four videos will bring you up to speed (pun intended) on the future of self-driving cars.
    First up, a great overview from March 17, 2015, from Protin Pictures done for TIME Magazine:



    So the safety issues with self-driving cars seem to be the major driver behind the relatively slow nature of this innovation. (Another pun. I couldn’t help myself.)  However, I wonder if the slow process is, in fact, the biggest positive of this invention.  I, personally, know many people who await the self-driving car for the “advantage” of more time on their cell phone.
    While some already are taking that liberty (yikes!), I’m not sure that’s what the earliest thought leaders were aiming for with self-driving cars. Most imagined a safer roadway and more time to actually connect with each other as passengers.

    A future filled with possibility…

    Take a look at this next piece with Chris Urmson, the director of Google’s Self-Driving Cars Project, explaining the program’s future possibilities:

    How does it work?

    OK… now that we know the history, motivation and possible contributions of the self-driving car, how does it actually work?
    Here’s a video that brings that into focus:

    What about us?

    So this is where we are in the process of bringing forth the self-driving car.
    What about the human experience? After all is said and done, we won’t embrace it unless it lifts us up in some way. Here’s a lovely piece that demonstrates the empowering wonder this technology will bring us…

    Jump over to CNN's full story on the latest in-house Google Self-Driving Car on cnn.com here to learn more about how this new self-driving vehicle, which "sees" 200 yards in all directions, is different from the previous iterations of retrofitted Toyota Prius Hybrids.
    For another fabulous and much deeper article about who will win this race to invent, take a look at Can GM Beat Google to the Self-Driving Car? over at Bloomberg Business News.



    Image: Self-driving car timeline - Past
    Source: Bloomberg
    Image: Self-driving car timeline - Present
    Source: Bloomberg
    Image: Self-driving car timeline - Far Future
    Source: Bloomberg
    Nicely done, but I’m curious about the many unasked questions related to their predictions about the future.


    Image: Flying car versus Self-driving Cars
    • Will cars have to look anything like cars that need a driver?
    • What will the best fuel source for these cars be?
    • Will they just be for city drivers?
    • How could they expand possibility in a direction we can’t even conceive at this point?
    • Would this change the driving age for youth?
    • What would this make possible for people too old to be safe drivers, and people with severe vision problems? 
    We could go on and on with these important possibilities, but here’s a fundamental question: will “cars” even be relevant by 2040 or 2050? Will they go the way of the Fax machine and VCR tapes? What if we discover some completely unforeseeable way to transport ourselves from place to place?

    Beam me up, Scotty!

    Article lifted from: Ever Widening Circles




  • Researchers found they were able to infect robots with ransomware; in the real world, such attacks could be highly damaging to businesses if robotic security isn't addressed.
    Ransomware has long been a headache for PC and smartphone users, but in the future, it could be robots that stop working unless a ransom is paid.
    Researchers at security company IOActive have shown how they managed to hack the humanoid NAO robot made by Softbank and infect one with custom-built ransomware. The researchers said the same attack would work on the Pepper robot too.
    After the infection, the robot is shown insulting its audience and demanding to be 'fed' bitcoin cryptocurrency in order to restore systems back to normal.
    While a tiny robot making threats might initially seem amusing -- if a little creepy -- the proof-of-concept attack demonstrates the risks associated with a lack of security in robots and how organisations that employ robots could suddenly see parts of their business grind to a halt should they become a victim of ransomware.
    "In order to get a business owner to pay a ransom to a hacker, you could make robots stop working. And, because the robots are directly tied to production and services, when they stop working they'll cause a financial problem for the owner, losing money every second they're not working,"
    Taking what was learned in previous studies into the security vulnerabilities of robots, researchers were able to inject and run code in Pepper and NAO robots and take complete control of the systems, giving them the option to shut the robot down or modify its actions.
    The researchers said it was possible for an attacker with access to the Wi-Fi network the robot is running on to inject malicious code into the machine.

    "The attack can come from a computer or other device that is connected to internet, so a computer gets hacked, and from there, the robot can be hacked since it's in the same network as the hacked computer," said Cerrudo, who conducted the research alongside Lucas Apa, Senior Security Consultant at IOActive.
    Unlike computers, robots don't yet store vast amounts of valuable information that the user might be willing to pay a ransom to retrieve. But, as companies often don't have backups to restore systems from, if a robot becomes infected with ransomware, it's almost impossible for the user to restore it to normal by themselves.
    If the alternative for a victim of robot ransomware is waiting for a technician to come and fix the robot -- or even losing access to it for weeks if it needs to be returned to the manufacturer -- a business owner might view giving in to the ransom demand as the lesser evil.
    Image: Researchers altered the robot's code to change its behavior and demand a ransom payment.

    "If it's one robot then it could take less time, but if there are dozens or more, every second they aren't working, the business is losing money. Keeping this in mind, shipping lots of robots takes a lot of time, so the financial impact is bigger when you have a computer compromised with ransomware," said Cerrudo.
    While the robot ransomware infections have been done for the purposes of research -- and presented at the 2018 Kaspersky Security Analyst Summit in Cancun, Mexico -- IOActive warn that if security in robotics isn't properly addressed now, there could be big risks in the near future.
    "While we don't see robots every day, they're going mainstream soon, businesses worldwide are deploying robots for different services. If we don't start making robots secure now, if more get out there which are easily hacked, there are very serious consequences," said Cerrudo.
    As with security vulnerabilities in the Internet of Things and other products, the solution to this issue is for robotics manufacturers to think about cybersecurity at every step of the manufacturing process, from day one.
    IOActive informed Softbank about the research in January but Cerrudo said: "We don't know if they [Softbank] are going to fix the issues and when, or even if they can fix the issues with the current design."
    Responding to the IOActive research, a Softbank spokesperson said, "we will continue to improve our security measures on Pepper, so we can counter any risks we may face."


  • There have been several stories around the upcoming Samsung S9, its hardware, features and how the new Samsung flagship phone is ready to battle it out with the iPhone X.


    Here’s a roundup of everything we know about the Samsung S9.

    What does it look like?

    A rumoured leak of the S9 and S9+ from popular mobile tipster Evan Blass.




    The back of the upcoming S9 and S9+ (notice the dual rear camera).





    The upcoming Galaxy S9 phones imagined in a variety of colors. 

    The Galaxy S9 is also said to look very similar to the Galaxy S8, perhaps with slightly smaller bezels.


    Launch Date

    Samsung will launch its next flagship phone -- now confirmed to be called the Galaxy S9 -- on Feb. 25. The debut will go down next month at Mobile World Congress (MWC) in Barcelona.


    The Technology Features

    While the debate rages over Apple’s decision to replace Touch ID with Face ID on the iPhone X, Samsung appears to have its own Face ID alternative ready, while also maintaining the Fingerprint scanner at the back.

    Samsung's Face ID alternative is called 'Intelligent Scan', and it combines facial recognition and iris scanning in an attempt to create something as fast and accurate as Face ID (if not more so). Code hidden in Samsung's Android Oreo beta software revealed the feature for the first time, and now LetsGoDigital has discovered more information about it in a newly filed Samsung patent (PDF link).


    Filed in English (usually Samsung patents are in Korean), it explains how the system will combine an iris camera, a light source module (IR LED) and proximity sensor. As LetsGoDigital explains: 
    “Once a user is located at a certain distance from the device (measured by the proximity sensor), the infrared light source module and the iris camera will be switched on to take a picture of the iris. The camera is able to register both eyes, as well as a part of the face.”
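    Based purely on that description, here is a rough sketch of how such a capture flow might be structured. This is not Samsung's actual code: every class name, method and distance value below is a placeholder invented for illustration.

```python
# A hypothetical sketch (not Samsung's implementation) of the capture flow
# the patent describes: the proximity sensor gates the IR light source and
# the iris camera, which images both eyes plus part of the face.

IRIS_CAPTURE_RANGE_CM = (20, 35)  # assumed working distance, made up

class IntelligentScanSketch:
    def __init__(self, proximity_sensor, ir_led, iris_camera, matcher):
        self.proximity_sensor = proximity_sensor
        self.ir_led = ir_led
        self.iris_camera = iris_camera
        self.matcher = matcher

    def try_unlock(self):
        # Only power the IR module once the user is at capture distance.
        distance_cm = self.proximity_sensor.read_distance_cm()
        low, high = IRIS_CAPTURE_RANGE_CM
        if not (low <= distance_cm <= high):
            return False  # user too near or too far; keep sensors off

        self.ir_led.on()                        # illuminate the eyes in IR
        try:
            frame = self.iris_camera.capture()  # both irises + part of face
        finally:
            self.ir_led.off()

        # Combine iris and face matching, as "Intelligent Scan" reportedly does.
        return self.matcher.matches_iris(frame) or self.matcher.matches_face(frame)
```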
    Interestingly, Samsung says this technology can be integrated not just into smartphones but also cameras, e-readers, tablets, PCs and TVs. Consequently, the company may treat it like ‘Samsung Pay’: a form of universal functionality across Samsung devices designed to lock users into the company’s ecosystem.
    As more pieces of the puzzle continue to fall into place, we also hear a rumor of a virtual fingerprint reader, a feature that made its debut on the Vivo phone at CES 2018. Bolstering these reports are the things we know about Qualcomm's next-generation Snapdragon 845 processor, which is likely to power the Galaxy S9, and which has the potential to drive serious advancements in camera and security technology.

    Camera: Two back camera plus one front camera

    Judging by rumor volume, the Galaxy S9 Plus -- and maybe also the Galaxy S9 -- are due for dual cameras on the back. A Reddit user's photo, first published by SamMobile, of a Galaxy S9 retail box lists dual rear-facing 12-megapixel cameras and a single-lens 8-megapixel front cam in addition to a 5.8-inch Quad-HD+ super AMOLED screen.  

    The Galaxy S9 could also have the company's new Isocell sensors, which support fast autofocus abilities that help the camera home in on fast-moving subjects, even in dim light. This technology allows for super-detailed slow-motion video recording at 1080p resolution and 480 frames per second, meaning the video should be both crisp and buttery smooth. And the Snapdragon 845 may give the Galaxy S9 some other advantages -- reportedly including making it among the first phones to record 4K Ultra HD video at 60 frames per second.
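    As a quick back-of-the-envelope check of that slow-motion claim (assuming a standard 30fps playback rate, which the reports don't state):

```python
# Footage captured at 480 fps and played back at an assumed 30 fps
# is slowed down by a factor of 480 / 30 = 16.
capture_fps = 480
playback_fps = 30  # assumed playback rate
print(f"Slow-motion factor: {capture_fps // playback_fps}x")  # -> 16x
```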






    Maintaining the headphone jack

    The once-mundane 3.5mm headphone jack has become an increasingly rare, "vintage" feature that could make the Galaxy S9 that much more attractive to people who aren't ready to ditch their collection of wired headphones. Apple walked away from the legacy port with its iPhone 7 and 7 Plus and hasn't looked back, and the Google Pixel 2, Moto Z, Essential Phone and others have followed suit. 

    When can we get it?

    The devices will be announced on the 25th. There are reports that they will be open for pre-orders in the first week of March, with shipping beginning towards the middle of March. Last year the local launch event for the Galaxy S8 was in the third week of April, with the device becoming available at the end of April. If Samsung Gulf speeds up the process and wraps up its countless bundle deals with local brands a little faster, we could get it in early or mid-April.