Facebook … Face Recognition Woes

Facebook is in the news yet again, this time for having to face a class action lawsuit for allegedly gathering biometric information without users’ explicit consent, via facial recognition technology.

What Facial Recognition Technology?

Facebook’s facial recognition feature, known as “tag suggestions”, suggests who might be present in uploaded photos, based on an existing database of faces.

The feature works by detecting any faces in an uploaded photo and standardising and aligning them for size and direction. For each face, Facebook then computes a face signature: a mathematical representation of the face in that photo. Finally, the face signatures are run through a stored database of user face templates to look for similar matches.
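To make the matching step concrete, the sketch below (Python) shows the kind of comparison such a system performs: each face signature from the photo is compared against stored templates, and a tag is suggested only if the similarity clears a threshold. The embedding values, the 0.8 threshold and the function names are illustrative assumptions, not Facebook’s actual implementation.

    # Illustrative sketch of template matching for face recognition.
    # The vectors below stand in for the "face signatures" described above;
    # how they are computed (the neural network) is not shown, and the
    # 0.8 threshold is an arbitrary, assumed value.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Similarity between two face signatures (1.0 = identical direction)."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def suggest_tags(photo_signatures, user_templates, threshold=0.8):
        """For each face signature found in a photo, return the best-matching
        stored user template if it is similar enough to suggest a tag."""
        suggestions = []
        for sig in photo_signatures:
            best_user, best_score = None, threshold
            for user_id, template in user_templates.items():
                score = cosine_similarity(sig, template)
                if score > best_score:
                    best_user, best_score = user_id, score
            suggestions.append(best_user)   # None means "no suggestion"
        return suggestions

    # Toy usage with random vectors standing in for real signatures
    rng = np.random.default_rng(0)
    templates = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
    photo = [templates["alice"] + 0.05 * rng.normal(size=128)]  # a face close to Alice's template
    print(suggest_tags(photo, templates))   # -> ['alice']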

What’s The Problem?

The problem, in legal terms, is that the software allegedly gathers (and presumably stores) biometric information about individuals, i.e. makes and stores face templates of them, without their explicit consent. This sounds as though it may breach Illinois state law – the state from which the class of people in the lawsuit in question is drawn.

The court order is reported to apply to Facebook users in Illinois for whom Facebook created and stored a face template after 7 June 2011.

What Are The Chances?

Although Facebook reportedly intends to fight the case and believes that it has no merit, the fact that the judge, James Donato, has ruled to certify a class of Facebook users, and has said that Facebook could be expecting billions in statutory damages, does not appear to bode well for Facebook.

Not Available Here

Privacy regulations mean that the facial recognition and tagging feature is not available in Europe or Canada, and can be turned off in settings for US users.

Facebook also said back in December 2017 that users would be notified if a picture of them was uploaded by someone else, even if they hadn’t been tagged in it.

Hearing In A Crowd Technology Developed By Google

Just as Facebook appears to be in trouble over facial recognition technology, Google has announced that its research team has developed technology that can recognise individual voices in a crowd, just as a human can.

The tech giant has made a demonstration video for the technology. The video shows how, with lots of people talking at once in a room, a user can select a particular face and hear the soundtrack of just that person. Users of this technology can also select the context of a conversation, and only references to that conversation are played, even if more than one person in the room is discussing that subject matter.

The AI technology behind the feature was developed using data collated from 100,000 videos of lectures and training videos on YouTube.
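Google has not released the model described here, but systems of this type generally work by predicting, for the chosen speaker, a ‘mask’ over the time-frequency spectrogram of the mixed audio and multiplying it in to suppress the other voices. The sketch below (Python) shows only that final masking step, with a hand-made mask standing in for the output of the trained network; all of the values are invented for illustration.

    # Minimal sketch of spectrogram masking, the core operation behind
    # "pick a face, hear only that voice" separation systems.
    # The mask here is fabricated; in a real system it would be predicted
    # by a neural network from the mixed audio plus video of the chosen face.
    import numpy as np

    def separate_speaker(mixture_spectrogram: np.ndarray, speaker_mask: np.ndarray) -> np.ndarray:
        """Apply a per-speaker time-frequency mask (values in [0, 1]) to a
        magnitude spectrogram of the mixed audio, keeping only the chosen voice."""
        assert mixture_spectrogram.shape == speaker_mask.shape
        return mixture_spectrogram * speaker_mask

    # Toy example: 5 frequency bins x 4 time frames
    mixture = np.array([[1.0, 0.5, 0.2, 0.9],
                        [0.3, 0.8, 0.7, 0.1],
                        [0.6, 0.4, 0.9, 0.3],
                        [0.2, 0.1, 0.5, 0.8],
                        [0.7, 0.9, 0.3, 0.4]])
    # Pretend the model decided the chosen speaker dominates the lower bins
    mask = np.array([[1.0, 1.0, 0.9, 1.0],
                     [0.8, 0.9, 1.0, 0.7],
                     [0.1, 0.2, 0.1, 0.0],
                     [0.0, 0.1, 0.0, 0.1],
                     [0.0, 0.0, 0.1, 0.0]])
    isolated = separate_speaker(mixture, mask)
    print(isolated.round(2))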

What Does This Mean For Your Business?

With GDPR on the way, the case against Facebook’s facial recognition technology is another reminder of how businesses need to get to grips with the sometimes complicated area of consent. Video images and face templates of individual faces are also likely to qualify as personal data, for which consent to collect and store will be needed under GDPR. Privacy, as well as security, is a right that is getting even greater protection in law.

The technology from Google that can recognise individual voices, and can follow individual conversations in crowds could unlock valuable business opportunities in e.g. improving the function and scope of hearing aids, or improving video conferencing tools by enabling them to take place in the middle of an office space rather than only in a separate, soundproofed meeting room (provided other visual distractions are minimised). It seems that new technology is beginning to be developed to help tackle age-old human challenges.

Google, The Law and Your ‘Right To Be Forgotten’

A businessman has won the “right to be forgotten” by Google after taking his case to the High Court, because he wanted a past crime he had committed to be removed from Google’s search engine results.

What Crime?

The (un-named) businessman was hoping to remove from Google the details of a conviction from 10 years ago, and of the six-month jail sentence he was given for ‘conspiring to intercept communications’. The businessman was forced to take Google to court after Google refused his requests to have the information removed from its search engine results. The man’s legal argument was that the details of his past conviction were disproportionately impacting his life and were no longer relevant, and that it was therefore not in the public’s or the man’s interest for Google to show the details in searches.

What Does The “Right To Be Forgotten” Mean?

The legal precedent for what has become known as ‘the right to be forgotten’ was set by the Court of Justice of the European Union back in 2014. It was the result of a case brought by Spaniard Mario Costeja Gonzalez who had asked Google to remove information about his financial history from its search engine results.

In this particular case, the ‘right to be forgotten’ means that Google has to remove all search results about the businessman’s conviction, including links to news articles.

Had Shown Remorse

The judge ruled in favour of the businessman, stating that he had shown remorse. Google has said that it will respect the judgement made in the case and pointed out that it has removed 800,000 pages from its results following ‘right to be forgotten’ requests.

Not So Lucky

Another businessman, who also brought a ‘right to be forgotten’ case against Google and who had committed the more serious crime of ‘conspiring to account falsely’, was not so lucky and lost his case. The High Court decided that the man, who had spent four years in jail for the crime, had “misled the public”, and that it would still be in the public interest for Google to keep the information about the man and his crimes in its search engine results.

Less Than Half

Google’s own Transparency Report from May this year revealed that, of the 2.4 million requests made since 2014 to remove certain URLs from its search results, Google has complied with less than half. Google doesn’t actually have to comply with a request, and can refuse to take links down if it can demonstrate that there is a public interest in the information remaining in the search results. Google can also re-instate links that it has previously taken down if it can show that it has grounds to do so.

What Does This Mean For Your Business?

It is good news that powerful international tech companies whose services are widely used, and who have the power to influence opinion and affect lives can sometimes be held accountable to national courts. There is a strong argument that they should not be a law unto themselves, and that they may not always be the best party to judge what is in the public interest.

The ‘right to be forgotten’ is particularly significant because it is something that all EU citizens will have when GDPR comes into force next month. This will impact businesses, many of whom may expect to receive ‘right to be forgotten’ requests, and will need to get their data management in order to both comply with GDPR generally, and to be able to respond quickly to such requests and avoid possible fines.

Facebook Notifies People Affected By Scandal

Facebook has begun notifying those users whose data is known to have been harvested and shared with data mining firm Cambridge Analytica.

On Your News Feed

If you are one of the 87 million people whose data has been shared, 1 million of whom are in the UK, when you log into your Facebook account, you will see a detailed message beginning with the words “We understand the importance of keeping your data safe.”

It is also now understood that most of Facebook’s 2.2 billion users may have had at least some public profile data scraped, and all users will be receiving a message entitled “Protecting Your Information”. This message will include a link which will allow them to see what apps they use, and what information they have shared with those apps. Users will also be given the option to stop sharing information with the apps or to stop any access to third-party apps altogether.

It should be noted, however, that Facebook stopped third-party apps from gathering data about the likes, status updates and other information shared by users’ friends back in 2015. Facebook has also taken action recently to make information such as religious and political views out-of-bounds to apps.

If you don’t trust Facebook to notify you if your information has been shared with Cambridge Analytica, you can check for yourself by following this link: https://www.facebook.com/help/1873665312923476?helpref=search&sr=1&query=cambridge

What Happened?

This relates, of course, to revelations that Facebook shared the data of its users with London-based data mining firm Cambridge Analytica via a personality quiz app called “You Are What You Like” (later replaced by the “Apply Magic Sauce” app), which had reportedly been developed for legitimate academic purposes. Revelations that the website from the original quiz re-directed users to a new one with different terms and conditions, thereby enabling users’ data to be harvested and reportedly used for political purposes by Cambridge Analytica (the same company used by the Trump election campaign) and by Canadian data company AggregateIQ (AIQ), which was involved in the Vote Leave campaign in the UK referendum, have caused wide-scale outrage.

Facebook is also reported to have suspended Cubeyou, a data analytics firm involved in targeted advertising and marketing. Cubeyou is reported to have collected data for academic purposes, as part of a partnership with Cambridge University in the UK (which has also found itself implicated in the scandal), and allegedly used it commercially.

Game Changer Says ICO Chief

The head of the UK’s Information Commissioner’s Office (ICO), Elizabeth Denham, has said that what happened with Facebook’s data sharing with Cambridge Analytica can be seen as a game-changer in data protection. The ICO has revealed that Facebook is now one of 30 organisations under wider investigation for the sharing and use of personal data and analytics with political campaigns, parties, social media companies and other commercial organisations.

Denham has said that although the Facebook scandal has drawn attention to the ICO’s ‘Your data matters’ campaign, it is too early to say whether the changes the social networking firm is making are sufficient under the law.

What Does This Mean For Your Business?

If you have been directly affected by Facebook’s data sharing you will have been informed in your Facebook account, and you can follow the link (given earlier in this article) to check for yourself.

As ICO Chief Elizabeth Denham has rightly said, this is an important time for privacy rights, particularly since the introduction of GDPR is little more than a month away. The widespread outrage and condemnation of Facebook’s data sharing with Cambridge Analytica highlights how important data protection and privacy rights are to us all. This should serve as a reminder to businesses and other organisations that as well as making sure that they comply with GDPR to avoid negative consequences, GDPR preparation is an opportunity to fully examine the important issue of how data is being used and stored, and where vulnerabilities are, and how simple improvements could be made that could protect and help the business as a whole.

1 In 10 Fooled By Social Engineering Attacks

A new report by security firm Positive Technologies shows that 1 in 10 employees would fall for a social engineering attack.

What Is A Social Engineering Attack?

Social engineering cyber-attacks rely upon the element of human error e.g. convincing / fooling a person into downloading malicious files, unwittingly corresponding with cyber-criminals, sharing contact information about employees and transferring money to hackers’ accounts, or clicking on phishing links.

Test

The results of the report are based on ‘penetration tests’ which involved sending 3,300 emails to employees containing links to websites, password entry forms and attachments. As the name suggests, a penetration test is an authorised simulated attack on a computer system, which is performed in order to evaluate the security of that system.

Tricked

The results showed that, worryingly, 17% of the messages were successful in convincing recipients to take actions that would have compromised their workstation, and potentially the entire corporate network, had the attack been real.
The tests showed that 15% of employees responded to emails with an attachment and link to a web page, while only 7% responded to test emails with an attachment. The most effective method of social engineering identified in the test was reported to be sending an email with a phishing link. In this case, 27% of recipients clicked on a link that led to a web page requesting credentials.

Real Company Names Convincing

The study showed that messages received from what appeared to be the account of a real company resulted in risky actions being taken by 33% of recipients, whereas messages from fake companies had only an 11% success rate.

Emotional Response Sought

Cyber-criminals often use methods that are designed to produce an emotional response that will make people forget about basic security rules. For example, in the tests, an email subject line of “list of employees to be fired” resulted in a 38% response, and “annual bonuses” brought a 25% response.

Overly Trusting If Not In IT

One interesting finding highlighted in the report was that 88% of those who opened / clicked on suspicious links, or even corresponded with attackers, worked outside of IT (and were presumably less aware of the risks) – for example, accountants, lawyers and managers. However, 3% of those who responded were security professionals.

Kept Trying To Open

The study found that some recipients who couldn’t open the malicious files kept trying, attempting to open the file or enter their password on a fake site up to 40 times!

What Does This Mean For Your Business?

Clearly, there is a case for better education and training among employees about the variety of methods, and the level of sophistication that cyber-criminals now use in attacks. Employees need to be able to spot potential attacks, and have clear policies, instructions, and help on hand about how to proactively protect the company, and how to respond to certain types of attack. One of the simplest forms of defence against threats entering the company via email is to make it policy never to open suspicious emails / emails from unknown sources.

In reality, attackers now use a combination of methods to breach the defences of companies, and new threats keep evolving, such as fileless hacking and fileless malware attacks facilitated by the PowerShell scripting language that is already built into Windows. Some basic ways that your business can improve security against social engineering attacks are:

  • Blocking delivery of email attachments with extensions that are executable (.exe, .scr), system (.dll, .sys), script (.bat, .js, .vbs), or otherwise risky (.mht, .cmd) (a minimal filtering sketch follows this list).
  • Authenticating the domain of an email sender e.g. using the Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM) protocols.
  • Authenticating a sender’s identity using other protocols, e.g. the Domain-based Message Authentication, Reporting and Conformance (DMARC) protocol.
  • Regularly applying operating system, anti-virus, and other software patches.
  • Implementing an on-demand malware detection system.
  • Scanning files before and after opening them.
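As an illustration of the first measure in the list above, the following sketch (Python, standard library only) parses an email and flags attachments whose extensions fall into a blocklist. The blocklist mirrors the extensions listed above, and plugging this into a real mail gateway is assumed to happen elsewhere; it is a minimal sketch rather than production filtering code.

    # Minimal sketch: flag email attachments with risky extensions before delivery.
    # The blocklist mirrors the extensions listed above; integrating this with a
    # real mail gateway is assumed to be handled elsewhere.
    from email import policy
    from email.parser import BytesParser
    from pathlib import PurePosixPath

    BLOCKED_EXTENSIONS = {".exe", ".scr", ".dll", ".sys",
                          ".bat", ".js", ".vbs", ".mht", ".cmd"}

    def risky_attachments(raw_message: bytes) -> list[str]:
        """Return the filenames of attachments whose extension is on the blocklist."""
        msg = BytesParser(policy=policy.default).parsebytes(raw_message)
        flagged = []
        for part in msg.iter_attachments():
            filename = part.get_filename() or ""
            if PurePosixPath(filename.lower()).suffix in BLOCKED_EXTENSIONS:
                flagged.append(filename)
        return flagged

    # Toy usage: a hand-built message with one risky attachment
    raw = (b"From: a@example.com\r\nTo: b@example.com\r\nSubject: invoice\r\n"
           b"MIME-Version: 1.0\r\n"
           b"Content-Type: multipart/mixed; boundary=XYZ\r\n\r\n"
           b"--XYZ\r\nContent-Type: text/plain\r\n\r\nPlease see attached.\r\n"
           b"--XYZ\r\nContent-Type: application/octet-stream\r\n"
           b"Content-Disposition: attachment; filename=payroll.js\r\n\r\nalert(1);\r\n"
           b"--XYZ--\r\n")
    print(risky_attachments(raw))   # -> ['payroll.js']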

Killer Bot Boycott

Reports that the state-run, university-based ‘Korea Advanced Institute of Science and Technology’ (KAIST) has been working on military robot research with defence company Hanwha have resulted in threats of a boycott by more than 50 AI researchers from 30 countries.

Killer Robots?

Although the threat of the boycott of KAIST appears to have been effective in exposing and causing KAIST to agree to stop any work related to the development of lethal autonomous weapons (killer robots), the story has raised questions about ethical red-lines and the regulation of technology in this area.

KAIST opened its research centre for the convergence of national defence and artificial intelligence on 20 February, with the reported intention of providing a foundation for developing national defence technology. It has been reported that a now-deleted announcement about the work of the centre highlighted a focus on areas like AI-based command and decision systems, navigation algorithms, large-scale unmanned undersea vehicles, AI-based smart aircraft training systems, as well as smart object tracking and recognition technology.

Fast Exchange of Letters

It has been reported that, almost immediately after receiving a letter signed by more than 50 AI researchers expressing concern about its alleged plans to develop artificial intelligence for weapons, KAIST sent its own letter back saying that it would not be developing any lethal autonomous weapons.

The President at the university, Shin Sung-chul, went on to say that no research activities that were counter to human dignity, including autonomous weapons lacking meaningful human control, had been conducted. Shin Sung-chul is also reported as saying that KAIST had actually been trying to develop algorithms for “efficient logistical systems, unmanned navigation and aviation training systems”, and that KAIST is significantly aware of ethical concerns in the application of all technologies including AI.

Who / What Is Hanwha Systems?

Hanwha Systems, the named partner from the defence / military world in the project, is a major weapons manufacturer based in South Korea. The company is known for making cluster munitions, which are banned in 120 countries under an international treaty.

Outright Ban Expected

Alongside the welcome reassurances from KAIST that it will not be researching so-called “killer robots”, it is widely expected that the next UN meeting on autonomous weapons, held in Geneva, Switzerland, will include calls for an outright ban on AI weapons research and killer robots.

Already Exists

As well as the Taranis military drone, built by the UK’s BAE Systems, which can technically operate autonomously, ‘robots’ with military applications already exist. For example, South Korea’s Dodaam Systems manufactures a fully autonomous “combat robot”, which is actually a stationary turret that can detect targets up to 3km away. This ‘robot’ is reported to have already been tested on the militarised border with North Korea, and is reported to have been bought by the United Arab Emirates and Qatar.

What Does This Mean For Your Business?

Many of the key fears about AI and machine learning centre on machines learning to make autonomous decisions that result in humans being injured or attacked. It is no surprise therefore, that reports of possible research into the development of militarised, armed AI robots play on fears such as those expressed by Tesla and SpaceX CEO Elon Musk who famously described AI as a “fundamental risk to the existence of civilisation.”

Even with the existing autonomous combat turret in Korea there are reported “self-imposed restrictions” in place that require a human to deliver a lethal attack i.e. to make the actual attack decision. Many fear that the development of any robots of this kind represents a kind of Pandora’s box, and that tight regulations and built-in safeguards are necessary in order to prevent ‘robots’ from making potentially disastrous decisions on their own.

It should be remembered that AI presents many potentially beneficial opportunities for humanity when it is used ethically and productively. Even in a military setting, for example, an AI robot that could e.g. effectively clear mines (instead of endangering more humans) has to be a good idea.

The fact is that AI currently has far more value-adding, positive, and useful applications for businesses in terms of cost-cutting, time-saving, and enabling up-scaling with built-in economies.

UK Universities Are Cryptojacking Targets

The latest attacker behaviour industry report by automated threat management firm Vectra shows that UK higher education institutions are now prime targets for illicit cryptocurrency mining, also known as ‘cryptojacking’.

Cryptocurrency Mining

In a cryptojacking attack, ‘mining script’ code such as Coin Hive is installed in multiple web pages without the knowledge of the web page visitor or, often, the website owner. The scammer then gets visitors’ computers to join their mining network, so that the combined computing power can be used to solve the mathematical problems on which crypto-currencies are based. Whoever solves these problems first is able to claim / generate cash in the form of crypto-currency – hence ‘mining’ for crypto-currency.
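The ‘mathematical problems’ involved are essentially brute-force hashing puzzles, which is why mining consumes so much processing power. The toy proof-of-work loop below (Python) illustrates the idea only; real crypto-currencies use far harder targets and different algorithms, and the difficulty value here is an arbitrary assumption.

    # Toy proof-of-work: find a nonce whose hash starts with a number of zeros.
    # Real crypto-currencies use far harder targets; the point is that the only
    # way to "solve" the problem is to burn CPU cycles trying nonces.
    import hashlib

    def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
        """Return the first nonce whose SHA-256 hash of block_data+nonce
        begins with `difficulty` hex zeros, plus that hash."""
        target = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce, digest
            nonce += 1

    nonce, digest = mine("example transactions")
    print(nonce, digest)  # whoever finds a valid nonce first "wins" the reward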

Taking Coin Hive as an example, this crypto-currency mining software is written in JavaScript and sends any coins mined by the browser to the owner of the website. If you visit a website where it is embedded in the page, you may notice that power consumption and CPU usage in your browser increase, and your computer will start to lag and become unresponsive. These slowing, lagging symptoms end when you leave the web page.

Why Target Universities?

According to the Vectra report, the UK’s universities are being targeted by cryptojackers because they have high-bandwidth networks and host many students whose devices are not well protected. This makes them ideal command and control centres for cyber-crime campaigns.

This means that students who are using the bandwidth e.g. to watch movies online could unwittingly be giving cyber criminals access to computing resources in the background by using websites that host cryptojacking malware.

It is also believed to be possible that the relative anonymity and power of the computing resources at universities are enabling a small number of students to tap into them, and carry out illicit cryptocurrency mining activities of their own.

Other Targets

Higher education institutions are, of course, not the only main targets. The report highlights the entertainment and leisure sector (6%), financial services (3%), technology (3%) and healthcare (2%) as also being targets for cryptojackers. The effects of being targeted by cryptojackers can be increased power consumption and a reduction in hardware lifespans.

What Does This Mean For Your Business?

Higher education institutions can often only issue notices to students found to be cryptomining, and / or issue a cease and desist order. They can also provide assistance in cleaning computers, and try to advise students on how to protect themselves and the university by installing operating system patches and by raising awareness of phishing emails, suspicious websites and web ads. These measures, however, don’t go far enough to address the challenge of better detection, and / or stopping cryptomining from happening in the first place.

Businesses are also struggling to keep up with the increasingly sophisticated activities of cryptojackers and other cyber-criminals, particularly with a global shortage of skilled cyber-security professionals to handle detection and response. In the meantime, the answer for many enterprise organisations has been the deployment of artificial intelligence-based security analytics. Where cryptojacking is concerned, AI is proving to be essential to augmenting existing cyber-security teams to enable fast detection and a response to threats.

The increased CPU usage and slowing down of computers caused by mining scripts waste time and money for businesses. If AI security techniques are beyond your current budget and level of technical expertise, you may be pleased to know that there are some simpler measures that your business can take to avoid being exploited as part of a cryptojacking scam.

If, for example, you are using an ad blocker on your computer, you can set it to block one specific JavaScript URL which is https://coinhive.com/lib/miner.min.js . This will stop the miner from running without stopping you from using any of the websites that you normally visit.
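If you would rather check a page yourself, one crude heuristic is simply to search its HTML source for references to known mining scripts, including the Coin Hive URL mentioned above. The sketch below (Python, using the common ‘requests’ library) does exactly that; the indicator list is an assumed, incomplete example, and obfuscated miners will not be caught this way.

    # Rough heuristic: fetch a page and look for known in-browser mining scripts.
    # The indicator list is illustrative and incomplete; miners are often
    # renamed or obfuscated, so the absence of a match proves nothing.
    import requests

    MINER_INDICATORS = [
        "coinhive.com/lib/miner.min.js",   # the URL referred to above
        "coinhive.min.js",
        "coinhive.anonymous",              # name used by Coin Hive's JavaScript API (assumed indicator)
    ]

    def find_miner_references(url: str) -> list[str]:
        """Return any known mining-script indicators found in the page source."""
        html = requests.get(url, timeout=10).text.lower()
        return [ind for ind in MINER_INDICATORS if ind in html]

    # Example usage (replace with a page you administer):
    # print(find_miner_references("https://example.com"))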

Also, a dedicated browser extension called ‘No Coin’ is available for Chrome, Firefox and Opera. This will stop the Coin Hive mining code being used through your browser. This extension comes with a white-list and an option to pause the extension should you wish to do so.

Coin Hive’s developers have also said that they would like people to report any malicious use of Coin Hive to them.

Maintaining vigilance for unusual computer symptoms, keeping security patches updated, and raising awareness within your company of current scams and what to do to prevent them are just some of the ways that you could maintain a basic level of protection for your business.

£870 Million Super-Cyber-Crook Captured

The suspected leader of the criminal gang behind the Cobalt and Carbanak malware campaigns that targeted banks and netted £870 Million has been arrested in Spain.

The Carbanak & Cobalt Malware Attacks

Cobalt and Carbanak are the names of different generations of malware, increasing in sophistication – three were used in all – which the cyber-criminal gang was able to introduce into 100 banks and other financial networks in 40 countries.

Anunak was the first malware campaign used by the gang, in late 2013. This was followed by the more sophisticated Carbanak, which was used until 2016. Finally, the gang moved to even more sophisticated attacks involving tailor-made malware based on the Cobalt Strike penetration testing software.

EUR 10 Million Per Heist

Cumulative losses to the gang from financial institutions are believed to be in the region of EUR 1 billion, and the Cobalt malware alone allowed criminals to steal up to EUR 10 million per heist.

Sent To Key Staff Members In Emails

The malware was sent to key staff members in booby-trapped phishing emails. When the computers of key staff members became infected with the malware e.g. by being tricked into opening the booby-trapped emails from the criminals, the gang was able to gain remote access to the banking networks to steal money.

Money was stolen by using remote access to order ATMs to dispense money at specific times (collected by gang members), and by altering databases to increase account balances so that more ‘mules’ could be used to collect even more money from inflated accounts via chosen ATMs.

Stolen money was also laundered via crypto-currencies and payment cards which enabled the purchase of luxury goods and houses.

Carbanak was claimed to have been discovered in 2014 by the Russian cyber-security company Kaspersky Lab.

Arrested

The person (as yet un-named by the authorities) believed to have masterminded the crimes was arrested in Alicante, Spain. The arrest was the result of a complex investigation by the Spanish National Police, supported by Europol, the (US) FBI, the Romanian, Moldovan, Belarusian and Taiwanese authorities, and private cyber security companies.

What Does This Mean For Your Business?

All too often we hear of major hacks and security breaches of businesses and organisations, but it is rare to hear about the culprits being caught. The remote and often invisible nature of the crimes, coupled with the anonymity and complexity of the methods of attack and money collection, tends to make cyber criminals difficult to apprehend. A combined and expert effort is needed, which is what has happened in this case, and it can only be good news for businesses worldwide that one key player appears to have been caught.

More cynical commentators may say that it was the large sums of money involved, and the fact that banks and financial institutions were the victims, that prompted such an effort to catch the perpetrators – an effort that smaller businesses perhaps cannot expect when they are targeted, even though the results of an attack may be more devastating for them.

This story is also a reminder that not only are many attacks sophisticated, but human error by staff members is still an important element in allowing successful cyber attacks to take place. Cyber security is the responsibility of all of us, and companies and organisations should make sure that all staff receive training about likely cyber threats and what procedures to follow when dealing with emails or requests to transfer money. Making it a rule to never open unknown emails is one basic way of counteracting the serious threat posed by malware.

Facebook Revamps Privacy Settings

In a move that Facebook says was due to happen before the recent personal data harvesting scandal, the social media giant has updated its privacy tools to make users better informed and more in control.

50 Million Profiles Harvested

The high-profile outcry that followed revelations over data from 50 million profiles that were harvested for use by Cambridge Analytica has resulted in around £56bn being wiped off Facebook’s market value since 16 March.

It is also unknown as yet how much damage has been done to the Facebook brand and the trust placed in it by users, although some commentators have suggested that Facebook is so much a part of daily life for people, and there is a lack of real alternatives, that the damage in terms of user loyalty may not be as bad as the media has suggested.

Changes

Even though Facebook has suggested that changes to its privacy settings were on the cards long before this latest scandal hit the headlines, some commentators may feel justified in saying that it is no coincidence that Facebook announced on its blog this week changes to the platform that are intended to help people understand how Facebook works and the choices they have over their data.

In summary, the changes that Facebook has announced are:

  • Generally making data settings and tools easier to find. In short, a re-designed settings menu on mobile devices makes everything accessible from a single place, and outdated parts have been cleaned up to clarify what information can and can’t be shared with apps.
  • There is a new ‘Privacy Shortcuts’ menu where you can:
    – Add more security, e.g. extra security layers such as two-factor authentication.
    – Review what personal information you’ve shared and delete it if you want to – this includes posts you’ve shared or reacted to, friend requests you’ve sent, and things you’ve searched for on Facebook.
    – Manage the information you give that will influence the type of adverts you’re shown.
    – Manage who sees your posts and profile.
  • The introduction of a new ‘Access Your Information’ section where you can securely access and manage e.g. posts, reactions, comments, and things you’ve searched for, as well as being able to delete anything from your timeline or profile that you no longer want on Facebook.
  • Giving you the ability to download a secure copy of the data that you’ve shared with Facebook, and giving you the option to move it to another service. This includes photos you’ve uploaded, contacts you’ve added to your account, and posts on your timeline.

More Changes To Come

Facebook has also said that in the coming weeks, it will be proposing updates to its terms of service and its data policy to better spell out what data it collects and how it uses it. Facebook is keen, in the light of the recent scandal, to point out that the updates are about transparency, and not about gaining new rights to collect, use, or share data.

Some commentators have suggested that Facebook also intends to make the link to fully delete an account more prominent.

Acknowledges Trust Damage

Facebook has acknowledged that it has lost people’s trust and needs to work on regaining it. No doubt these changes (which Facebook has worked on with regulators, legislators and privacy experts) are intended as an initial offering in that effort, as well as a move to make the platform more GDPR-ready.

What Does This Mean For Your Business?

Yes, there is an element of Facebook needing to get something positive out there quickly to show that it is doing something in response to media and public opinion about the damaging recent scandal. These changes are also, however, a clear move by Facebook to make sure that it will be GDPR compliant when the new regulation comes into force in May. The sheer size of Facebook’s customer base and the company’s earnings mean that it is very aware of the challenges that GDPR could bring: in the event of a data breach with GDPR in force, Facebook could potentially be looking at fines of 4% of its global turnover. It’s no wonder, therefore, that the changes to the platform’s privacy settings have been made now.

50 Million Facebook Users’ Data With Cambridge Analytica

Facebook is at the heart of a storm after a whistleblower alleged that the data analytics firm that worked with Donald Trump’s election team and the winning Brexit campaign harvested 50 million Facebook profiles from a data breach.

Why?

London-based data analytics company, Cambridge Analytica, which was once headed by Trump’s key adviser Steve Bannon, has been accused of illegally harvesting 50 million Facebook profiles in early 2014 in order to build a software program that could predict and use personalised political adverts to influence choices at the ballot box in the last U.S. election.

Under Investigation

Cambridge Analytica is already the subject of two inquiries in the UK. The first is by the Electoral Commission which is looking into the company’s possible role in the EU referendum. The second is by the Information Commissioner’s Office which is looking into the company’s possible use of data analytics for political purposes.

Also, the company is the subject of an investigation in the US over possible Trump-Russia collusion.

It has been reported that Elizabeth Denham, the head of Britain’s Information Commissioner’s Office, is seeking a warrant to search the offices of consultancy Cambridge Analytica over the breach.

Facebook Under Scrutiny

Facebook has, of course, faced strong criticism over the breach, one tangible result of which has been nearly $40 billion off its market value as Facebook’s investors have become worried that damage to the reputation of the social media giant’s network will deter users and advertisers.

In a BBC radio report, the ICO’s chief Elizabeth Denham said that the ICO is looking at whether or not Facebook secured and safeguarded personal information on its platform, whether Facebook acted robustly when it found out about the loss of the data, and whether or not people were informed.

Also, the head of Britain’s cross-party Media parliamentary committee is reported to have written to Facebook’s Mark Zuckerberg asking for more information by Monday 26 March, and in Dublin, Ireland’s privacy watchdog (the lead regulator for Facebook in the European Union) has said that it is following up with Facebook to clarify its oversight.

Harvested By Kogan’s App

It has been reported that the data was harvested from Facebook by an app on Facebook’s platform, created by British academic, Aleksandr Kogan, that was downloaded by 270,000 people, providing access to their own and their friends’ personal data too. It has been reported that Kogan says he changed the terms and conditions of his personality-test app on Facebook from academic to commercial part way through the project.

Facebook has said that Kogan violated its policies by passing the data to Cambridge Analytica. Facebook says it was told that the data had since been destroyed, and it has made its own efforts to obtain proof of this.

Mr Kogan has said on BBC radio that he was advised that the app was entirely legal, and that he thinks he’s being made a scapegoat for Facebook and Cambridge Analytica.

This latest incident sees Facebook back in hot water following on from reports of how its platform was used by outside interests for posts and adverts that were designed to influence the result of the US election. The share price has been impacted significantly this week.

What Does This Mean For Your Business?

There are so many worrying facets to this story, not least that personal data may not have been protected well enough to allow it to be harvested by an app on the platform, and then passed to a third-party that allegedly used it to create a tool to influence elections. Also, it has been several years since the breach happened, and news of the breach has only just been released. Some industry insiders have described the incident as ‘horrifying’, and many may rightfully believe that Facebook has a lot of questions to answer, as does Cambridge Analytica.

Facebook will be painfully aware that if the ICO’s investigations find Facebook to be at fault, the social media giant could be looking at a fine of up to 500,000 pounds ($700,000), and with the introduction of GDPR in May, it could be facing fines of up to 4% of its global turnover.

Also, Facebook is a major advertising platform for businesses, and some marketing commentators have pointed to the fact that scrutiny of Facebook over this latest issue could impact Facebook’s ability to gather and deploy data for ad targeting, which has been vital to ad efficacy and budget growth.

All the recent bad publicity about Facebook has seen the number of daily users in the United States and Canada fall for the first time in its history, dipping in the company’s home market by 700,000 from a quarter earlier to 184 million.

We haven’t heard the half of this story yet, and it remains to be seen what information will be released in the coming days and weeks and as the result of numerous investigations.

Camelot Hack – ‘It Could be You!’

Lottery operator Camelot has announced that 150 customer accounts have been affected by a hack that took place prior to Friday’s £14-million draw at 8.30pm.

Low Level

The company has described the hack as ‘low level’ and has stressed that no money was stolen, and that the attackers only saw limited information. Camelot attributed the early discovery of the attack to its regular security monitoring which, in this case, detected suspicious activity on a small number of accounts.

Credential-Stuffing

The kind of hack that took place used a method known as ‘credential-stuffing’. This involves taking lists of usernames and passwords stolen from other websites and circulated online, e.g. in hacking groups or on the dark web, and trying them against other sites. The method relies on people using the same password for multiple websites.
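On the defending side, one tell-tale sign of credential-stuffing is a single source attempting logins against many different accounts in quick succession, rather than one user mistyping their own password. The sketch below (Python) flags source IPs that fail logins against an unusually large number of distinct usernames; the threshold and the shape of the login events are assumptions for illustration, not a description of Camelot’s monitoring.

    # Minimal sketch of credential-stuffing detection: flag source IPs that
    # attempt logins against an unusually large number of distinct accounts.
    # The threshold and the shape of the login events are assumptions.
    from collections import defaultdict

    def suspicious_ips(failed_logins, distinct_account_threshold: int = 20):
        """failed_logins: iterable of (source_ip, username) tuples.
        Returns IPs that tried more distinct usernames than the threshold."""
        accounts_per_ip = defaultdict(set)
        for ip, username in failed_logins:
            accounts_per_ip[ip].add(username)
        return {ip for ip, accounts in accounts_per_ip.items()
                if len(accounts) > distinct_account_threshold}

    # Toy usage: one IP spraying stolen credentials across many accounts
    events = [("203.0.113.5", f"user{i}") for i in range(50)] + [("198.51.100.7", "alice")] * 3
    print(suspicious_ips(events))   # -> {'203.0.113.5'}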

Suspended Accounts + Change Passwords

Camelot has said that it has directly contacted the customers whose accounts had been affected and all of the affected accounts have now been suspended. The company has also advised all 10.5 million National Lottery players to change the password on their online accounts.

Warned In November 2016

Back in November 2016, Camelot announced that it believed that as many as 26,500 online National Lottery accounts had been hacked using login details that had been stolen from elsewhere (e.g. a list of stolen passwords circulated online). At the time, Camelot said that it believed that suspicious activity appeared to have taken place in fewer than 50 of the hacked accounts.

Camelot re-assured customers by saying that it didn’t hold full debit card or bank account details in National Lottery players’ online accounts, and that no money had been taken or deposited.

Criticism

Although, as with the latest hack, Camelot was quick to submit a breach report to the Information Commissioner’s Office, some critics voiced concerns and suspicion that there could have been some kind of deficiency in the system for 26,500 correct logins to have been possible, even though Camelot said that the details were not taken from its servers.

What Does This Mean For Your Business?

If you have an online National Lottery account, change the password as soon as possible.

This story illustrates one of the main dangers of using the same password for multiple accounts. If your login details are hacked and stolen from just one website, you could fall victim to cyber-crime as those details are circulated among other hackers and used in credential-stuffing attacks. The advice, therefore, is to change your passwords regularly and avoid using the same password for multiple accounts.

This story is also a reminder that businesses have a legal responsibility to protect customer data, and this responsibility will be enforced even more rigorously, and with the threat of very large fines for non-compliance with the introduction of GDPR in May this year.

One positive aspect of this story is that Camelot appear to have been proactive in their monitoring of customer account activity, were quick to inform the Information Commissioner’s Office, publicly announced the hack, and gave clear advice to customers (unlike many other companies). This story is also an example of why having a good Disaster Recovery Plan is important.