Biometrics and Privacy in 2015: Maintaining a Delicate Balance


As biometric technology is becoming mainstream, will it attract more privacy concerns?

With the rapid adoption of biometric identification by Apple and other major mobile phone manufacturers, biometric technology went mainstream in 2015, putting it in the hands of more people for daily use. Does this mean that our privacy is even more at risk from snoopers, hackers, and identity thieves?

The answer is — not really. Today we are going to discuss how biometric technology is maintaining a delicate balance with privacy concerns as its use proliferates across many different markets.

Florida Ruling Highlights Continued Urgency to Educate Public on Biometric Technology

biometric identification management technology helps school lunch lines move faster

A student scans their fingerprint in a school lunch line for payment.

You may have heard that the State of Florida recently voted to ban the collection of biometric data from school students. The legislation was a direct response to several Florida school districts capturing student biometric data and using it for various purposes including purchasing lunch in school cafeterias and tracking students on school buses. Ongoing concerns over the protection of student biometric data as well as who has access to it sparked discussion on the use of the technology in schools and prompted legislators to stop it.

One major concern is how biometric information is stored and how secure the encryption and verification system is. Most, if not all, systems work on the principle that it is not the student’s actual biometric that is stored, but rather a numerical template used for verification. The worry is that criminals will find a way to steal a student’s biometric template, reverse engineer it, and then use it to access the current system or another one that relies on the same biometric credential. This is a legitimate concern, since biometrics are quite different from an ID card or token, which can be replaced when lost or compromised. Biometrics, on the other hand, are said to be an “irrevocable” attribute: they are based on human physiological characteristics and can’t be “replaced.”
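The storage principle described above can be made concrete with a minimal, hypothetical sketch. All names here are invented, and real products use proprietary feature extraction and protected-template schemes rather than a bare hash; the point is only that a system can verify a person without ever storing the raw scan:

```python
import hashlib

def make_template(raw_scan: bytes, salt: bytes) -> str:
    """Derive a one-way numeric template from a raw scan.

    The raw biometric image is never stored; only this derived
    digest is kept, so the original cannot be read back out of
    the database. (Real systems use feature extraction plus
    protected-template schemes, not a bare hash; this is only
    a sketch of the storage principle.)
    """
    return hashlib.sha256(salt + raw_scan).hexdigest()

def verify(raw_scan: bytes, salt: bytes, stored_template: str) -> bool:
    """Re-derive the template from a fresh scan and compare."""
    return make_template(raw_scan, salt) == stored_template

# Enrollment stores only the template string, never the scan itself.
salt = b"per-student-salt"
enrolled = make_template(b"fingerprint-features", salt)

# Later, a fresh scan of the same finger verifies...
assert verify(b"fingerprint-features", salt, enrolled)
# ...and a different finger does not.
assert not verify(b"someone-else", salt, enrolled)
```

The database row in this sketch holds only a 64-character digest, which is why "stealing the template" is not the same thing as stealing the fingerprint itself.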

In response to the Florida State Legislature’s decision to ban biometrics in schools, Janice Kephart of the Secure Identity & Biometrics Association (SIBA) made the following statement:

“I’m concerned this precedent could spill over to other states due to mostly a lack of education on what these systems do or don’t do,” said Janice Kephart, the founder of the Secure Identity & Biometrics Association (SIBA) and an outspoken advocate for the use of new authentication technologies, in a recent interview. “It’s really concerning.”

After a thorough review of the legislation, Ms. Kephart went on to say that the logic forming the body of the bill was based on “misunderstood science” and essentially penalizes the entire state for the actions of two districts that failed to properly notify parents and secure their permission for students to “opt in” to having their biometric credentials captured. If you read statements from Florida lawmakers on the issue, it’s clear that the genesis of their actions seems tied more to constituent fear of “Big Brother” and privacy/civil liberty violations than to fact-based arguments about how the technology actually works. The use of palm vein biometrics in Pinellas County school lunch lines, for example, is a clear illustration of how the technology can be misunderstood.

The argument that student biometric data from a palm vein reader could easily be stolen and used by a criminal seems flawed when you look at the facts about the science. Fujitsu, the company that manufactures the palm vein device, has clearly stated that it uses multiple layers of encryption to secure biometric information and doesn’t even capture an image of the palm vein, but instead converts it into a template protected with a private encryption key. Furthermore, Fujitsu relies on the flow of hemoglobin through the bloodstream as a “liveness detection” security measure, which makes the technology virtually impossible to spoof in order to use another person’s credentials to access a system. Ultimately, is it possible to “steal” someone’s biometric credentials and reverse engineer them to recreate an image, whether fingerprint, palm vein, iris, or another biometric modality? Anything is possible in this day and age, but the chances of it actually happening are extremely remote. One read of some of the logic behind the Florida legislation, though, and you would think that recreating a student’s biometric credentials is a piece of cake.

Unfortunately, the biometrics industry often falls victim to misperceptions about how the technology actually works, and these can be magnified by people intent on stopping the advancement of this technology as a more modern identification platform. As the saying goes, perception tends to be nine-tenths of reality, and this has never been more evident than in biometrics. People who do not completely understand the technology, but who perceive government as rapidly encroaching on our personal lives and see personal privacy slowly disappearing in our digital world, seize on biometrics as just another tool to control our lives. In reality, biometrics is used all over the world, has drastically improved security, has saved untold amounts of money, resources, and time for businesses and governments, and continues to be used in new and creative ways to establish accountability and protect individual privacy.

It’s crystal clear that the biometrics industry has a lot of work left to do when it comes to public education on how the technology works. We hope that biometric vendors take this call to action seriously and embark on, or continue, their push to educate and inform so that more rational decisions can be made about the use of this technology by the general public. We need to be taking steps forward in biometrics, not steps back.

After all: Truth is universal. Perception of truth is not.

In what ways do you feel the biometrics industry can better educate the public about the technology?


Privacy: Will Mobile Apps with Biometric IDs Help Advance Biometrics Acceptance?

will biometric mobile authentication take a step forward?

Will the use of biometrics for mobile device authentication help advance acceptance of the technology?

The following guest post is by Nicole Williams, professional blogger.

Biometrics seemed like such a futuristic term just a few years ago, but now it’s here, and according to CNET it’s predicted to be a ‘common’ form of security by 2015. However, many companies are concerned about whether biometrics will offer a viable security solution, and consumers worry that biometric systems will violate their privacy through the data they store. Many of these concerns are caused by a lack of understanding of biometric security systems.

Many people are unaware that fingerprint scanners and voice recognition apps are forms of biometric security. Millions of mobile device users download these apps as a first line of defense to secure their text messages, phonebook contacts, and images. Because there are many ways for data thieves to get past patterns, passwords, or number codes, those methods can only secure a device to a certain degree. This is bad news for businesses that subscribe to the BYOD trend, in which employees are encouraged to work from their own devices both on and off premises. These devices hold valuable data about clients and the business itself, so unauthorized access could spell danger.

This year’s widely publicized data attack on the retail giant Target has raised concerns about how data is stored and accessed. Some security experts believe that biometrics could have provided a much stronger wall of protection around this data, perhaps preventing the attack from occurring in the first place. With so many businesses lacking information about biometric security, however, this unfortunate incident was followed by many others. Biometric security works by providing an added layer of protection that only the authorized user can get past. Since many people are already using biometric apps to secure mobile devices, biometrics is predicted to become the most popular form of device and data security for both business and private use.

How Biometrics Work

Every person has a distinct pattern on their fingertips, in their eyes, and in their DNA. Biometric scanners take images of these patterns and compare them to future images. This is very similar to the blink method that astronomers use to track changes in the night sky: astronomers photograph a section of the sky, photograph the same section again later, and use a computer program to compare the images so that even the slightest change is noted immediately. With biometrics, the patterns must match or access is denied. There are pros and cons to using biometrics (e.g., cost vs. ROI and ease of use vs. benefit), but the pros greatly outweigh the cons.
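The comparison step can be sketched as a similarity score checked against a threshold, since two scans of the same finger are never bit-for-bit identical. This is a toy illustration: the feature values, tolerance, and threshold below are invented, and real matchers use far richer feature sets.

```python
def similarity(a: list[float], b: list[float]) -> float:
    """Fraction of features that agree within a small tolerance.

    Real scans of the same finger differ slightly (sensor noise,
    finger placement), so matching is fuzzy, not exact equality.
    """
    matches = sum(1 for x, y in zip(a, b) if abs(x - y) < 0.1)
    return matches / len(a)

def is_match(enrolled: list[float], probe: list[float],
             threshold: float = 0.9) -> bool:
    """Grant access only if the probe is similar enough."""
    return similarity(enrolled, probe) >= threshold

# Invented feature vectors for illustration.
enrolled     = [0.12, 0.80, 0.33, 0.55, 0.91]
same_finger  = [0.13, 0.79, 0.35, 0.54, 0.90]  # small sensor noise
other_finger = [0.70, 0.10, 0.95, 0.20, 0.40]

assert is_match(enrolled, same_finger)
assert not is_match(enrolled, other_finger)
```

Tuning the threshold trades false accepts against false rejects, which is exactly the cost-versus-benefit balance the paragraph above alludes to.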

All in all, biometrics are becoming a more acceptable way of securing data thanks to the introduction of biometrics on mobile devices. The average user can see how biometrics work and the benefits of using them in a non-threatening situation. This increases the likelihood of them accepting biometrics for other uses such as ATM access, business or home premise access and security alarm access.

The key here is to continue to educate users on the benefits of biometrics and to find easy-to-use solutions with a relatively short learning curve. As more mobile app developers and computer manufacturers use biometrics as the first point of access to data, consumers and businesses will grow more comfortable using it as well.

About the author: Nicole Williams is a keen technology enthusiast and enjoys blogging about topics like technology and productivity. She is a professional blogger who currently writes for Micro Com Systems and is a guest blogger for M2SYS Technology, writing about the relationship between biometrics for mobile ID and increasing public acceptance of biometrics.




Additional References:

5 Patient Identification and Data Matching Issues the New HIMSS “Innovator in Residence” Must Address

accurate patient identification and data matching are important issues for the healthcare industry

The new HHS “Innovator in Residence” faces some tough issues on patient identification and data matching.

On the heels of the recent announcement by HIMSS and the Department of Health and Human Services to hire an “Innovator in Residence” and make progress on the establishment of a nationwide patient data matching strategy, we thought it would be pertinent to outline some of the issues this person will face that require careful consideration. If the end goal is to establish a more consistent, industry standard approach that redefines patient identification and data matching accuracy, this new leader faces some tough challenges on the road ahead. Matching the right patient to the right data requires almost heroic efforts across an extremely disparate healthcare network and is the cornerstone of any viable health information exchange (HIE). Here are our top 5 issues that the new HIMSS/HHS “Innovator in Residence” must address:

1. Cost – Any new patient identification and data matching initiative will likely require assessing the potential financial impact on healthcare facilities, since any solution will most likely involve incorporating accurate matching algorithms into certified EHRs plus making changes to the fields that capture soon-to-be-standardized patient identifying attributes. With the recent changes that the HITECH Act and Meaningful Use requirements brought to the industry, and the dollars already shelled out for health IT, investment-weary healthcare providers may balk at any solution that requires additional funds allocated to EHR resources, let alone one that completely replaces a system.

The Office of the National Coordinator for Health Information Technology (ONC) recently released results from a study on developing an open source algorithm “to test the accuracy of their patient matching algorithms or be utilized by vendors that do not currently have patient matching capabilities built into their systems.” Their results indicated:

“During the environmental scan, many indicated that replacing their current systems would be cost prohibitive. As such, it is not suggested that a standardized patient matching algorithm be developed or required. In a more limited way, however, there is value in developing an open source algorithm or updating and supporting an existing open source algorithm that EHR vendors may choose to utilize in their products.”

2. Patient buy-in and accountability – As noble as the healthcare industry’s efforts to establish more accurate patient identification and data matching standards are, the entire initiative is moot unless the new Innovator in Residence forges best practices and policies that encourage patients to keep their demographic information up to date and accurate. The new Innovator in Residence would be wise to capitalize on the patient engagement momentum spurred by Meaningful Use Stage 2 and extend the patient engagement initiative to include patient accountability for demographic information accuracy. Without patient buy-in and involvement, the industry can’t reasonably expect any worthwhile patient identification and data matching initiative to lift its wheels off the ground.

3. Technology – Incorporating non-traditional data attributes to improve patient matching is a great example of a “wish list” item for industry advocates pushing for stricter patient identification and data matching, but currently most EHR systems do not support the collection of this information in a standardized field format. Any legitimate effort to standardize patient identifiers and substantially improve data matching will most likely require new technologies or modifications to existing ones. On the surface, requests to add demographic fields to existing EHR interfaces or to incorporate standardized deterministic or probabilistic matching algorithms may seem like small changes that don’t require a lot of effort, but in reality even the simplest changes require health IT vendors to make significant investments in upgrading or completely replacing existing technology.
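The deterministic-versus-probabilistic distinction mentioned above can be illustrated with a toy scorer in the spirit of a probabilistic record linkage method: instead of requiring every field to agree exactly (deterministic), each agreeing field contributes a weight, and the total score is compared to thresholds. The field names, weights, and thresholds here are invented for illustration only.

```python
# Toy probabilistic patient matcher: each agreeing demographic
# field contributes a weight; the summed score is classified
# against two thresholds. (Weights and thresholds are invented.)
WEIGHTS = {"last_name": 4.0, "dob": 5.0, "zip": 2.0, "first_name": 3.0}

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Sum agreement weights across demographic fields."""
    return sum(w for field, w in WEIGHTS.items()
               if rec_a.get(field) == rec_b.get(field))

def classify(score: float, upper: float = 10.0,
             lower: float = 5.0) -> str:
    """High scores auto-link, low scores auto-reject,
    anything in between goes to a human reviewer."""
    if score >= upper:
        return "match"
    if score <= lower:
        return "non-match"
    return "manual review"

a = {"first_name": "Jon",  "last_name": "Smith",
     "dob": "1980-04-02", "zip": "30341"}
b = {"first_name": "John", "last_name": "Smith",
     "dob": "1980-04-02", "zip": "30341"}

# A deterministic matcher would reject this pair outright because
# "Jon" != "John"; the probabilistic score (4 + 5 + 2 = 11) still
# clears the upper threshold.
assert classify(match_score(a, b)) == "match"
```

This is why probabilistic approaches tolerate typos and nicknames that would break an exact-match rule, and also why they demand a "manual review" workflow for borderline scores.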

4. Rekindling the national patient identifier debate – Did you know that it’s been 14 years since Congress placed a moratorium on funding research and implementation of a national patient identifier (NPI)? 14 years. Sure to be rekindled as a debate topic that closely coincides with the industry’s push to standardize patient demographic data, the idea of establishing an NPI needs to be addressed now, and the new Innovator in Residence should be standing behind the healthcare industry podium leading the discussion. Sure, there are lingering questions about the privacy and security implications of creating an NPI, and about who will manage and have access to any databases created, but ultimately the topic deserves to be put back on the table, and expectations are that the Innovator in Residence will spearhead the effort. Many people believe that an NPI is no different from the plethora of other personal identifiers we deal with in everyday life – Social Security numbers, employee IDs, and driver’s license numbers, to name a few. Why should an NPI be treated any differently? We surmise that the new Innovator in Residence will have to address the NPI question sooner rather than later.

5. The validity of health information exchanges (HIEs) – Although there are myriad reasons to develop HIEs, the bottom line is that they exist to facilitate the fluid exchange of health information between disparate systems in order to improve individual and population health. What often seems to be left out of the conversation about HIEs is the introduction of a foolproof patient identification technology that can uniquely tie a patient to their electronic health record in a standardized data format to help ensure high levels of data integrity. After all, what good is developing an integrated HIE without a back-end patient identification system that prevents the creation of duplicate medical records and overlays?

The new HIMSS/HHS Innovator in Residence faces some tough challenges in helping to tie together and implement a nationwide patient identification and data matching initiative. What points would you add to our list that are critical for this new position to address?

January #biometricchat Summary – Privacy and Biometrics with Special Guest Shaun Dakin

January's biometric tweet chat discussed privacy and biometrics.

January #biometricchat discussed privacy and biometrics.

On Thursday, January 10th, we hosted the first #biometricchat tweet chat of 2013. The topic was biometrics and privacy, and our guest was Shaun Dakin, a privacy expert and the man responsible for establishing the National Political Do Not Call Registry as well as writing several op-ed pieces, including one for the Washington Post calling for a Privacy Bill of Rights for voters. Shaun was gracious enough to lend his time to discuss his thoughts on the current issues that privacy advocates are concerned about and his opinions on biometric technology’s effect on privacy.

For a copy of the Storify chat transcript please click here.

Here is a list of the questions that we asked Shaun during the chat:

  1. Can you bring us up to speed on what privacy advocates currently feel is the most pressing privacy issue of our time? What technology has the most disastrous impact on privacy?
  2. Is privacy primarily a cultural, contractual, or technological issue?
  3. What is the more appropriate, effective, and desirable approach – educating the public on privacy or lobbying the government to pass laws protecting it?
  4. Ireland recommends using a “Privacy by Design” system, which encourages proactively embedding privacy design into biometric technology. Should this type of approach be used in the U.S.?
  5. Does a “Privacy Impact Assessment” carry any weight with advocates as a necessary tool in constructing a privacy-friendly biometric identification solution?
  6. On the scale of existing threats to privacy, where does biometrics fit in and what steps can the biometrics industry take to promote and encourage privacy friendly solutions?
  7. Biometrics is often viewed as a “privacy protector” in that it can prevent identity fraud, which is becoming an epidemic of global proportions. Do you agree or disagree with this statement?

Shaun felt that the biggest privacy story of 2012 was the increasing power of the government to search electronic communications, as brought to light by the David Petraeus CIA scandal. He went on to say that privacy seems to be a generational phenomenon, with younger generations very willing to give up their privacy in exchange for something else. He expects to see privacy norms stretched well beyond where we are today, with government surveillance slowly becoming the privacy hot topic of our times.

Shaun went on to say that he believes there should be some sort of baseline privacy legislation in the U.S. “with teeth.” He also reminded us that in the last session of Congress more than 21 pieces of privacy legislation were introduced, but none of them passed. Shaun agreed that the “Privacy by Design” concept, which encourages proactively embedding privacy protections into biometric systems, is a good idea on paper but tough to implement in reality. He pointed out that most developers don’t think about privacy as a necessary design step; instead, they justifiably place their focus on revenue and user numbers.

We rounded out the chat by getting Shaun’s thoughts on where biometrics stands on the scale of existing threats to privacy. He said that biometrics is still not top of mind with the public, but with the recent announcement by Disney that it would be using RFID bracelets storing personal information in its parks, public awareness may change; this may also be an opportune moment for Disney, as a major brand, to effect some change in the industry.

Please join us in thanking Shaun for his time on the chat about privacy and biometrics, and thanks to everyone who participated! Look for the announcement of February’s #biometricchat topic in the next couple of weeks.

Please drop us a note at if you have an idea for a topic. Thank you!


Privacy and Biometrics — Where are we in 2013? January’s #biometricchat Explores the Issues

January's biometric tweet chat will discuss privacy and biometrics.

January #biometricchat to discuss privacy and biometrics.

When: January 10, 2013, 11:00 am EST, 8:00 am PST, 16:00 BST, 17:00 CEST, 23:00 SGT, 0:00 JST

Where: (hashtag #biometricchat)

What: Tweet chat on privacy and biometrics with Shaun Dakin (@ShaunDakin, @PrivacyCamp), Data Privacy Advocate and Founder of #privchat

Topics: What technologies have negative impacts on privacy, how the privacy industry works for change, privacy and biometrics, effectiveness of “privacy by design” and “privacy impact assessments,” biometrics as a “privacy protector,” and more

It almost seems impossible to engage in conversation about biometric technology without broaching the subject of the technology’s impact on individual privacy rights. With good reason, many people have concerns about personal data that is collected on them with or without their knowledge and how that data is used and stored. Biometrics often acts as a lightning rod for modern discussions on where technology and privacy collide, and what rights (or perceived lack thereof) we have as individuals to control who knows exactly what about us.

There are many who believe that preventing the use of biometric technology in any capacity is the only way to guarantee that individual privacy and civil liberty rights are maintained. Others believe that in the bigger picture, biometrics is a key technology to prevent terrorist attacks, promote global security, create efficiency, and increase convenience. Where do you stand on the issues?

Some may recall that we discussed privacy and biometrics at the inaugural #biometricchat in October of 2011. For more information on that chat, please click here. Join us on January 10th from 11 a.m. to 12 p.m. EST as we discuss the issues and explore the impact of biometric technology on privacy.

Just in case you are interested in participating but are new to Tweet chats, please read this post which outlines the instructions and procedures. We hope that you will join us for the discussion, and please help us to spread the word among your colleagues and friends.

Do you have any questions about privacy and biometrics that you would like to ask Shaun? Just drop us an email at and we will try and include them in the chat.

Thanks, and we hope to see you next Thursday, January 10th, at 11 a.m. EST for the #biometricchat tweet chat!

M2SYS Launches White Paper Library on Web site

M2SYS releases a library of White papers on biometric technology

M2SYS White Papers

Recently, we launched a new page on our Web site for current and future M2SYS biometric research White papers written on a variety of topics. The page is separated by tabs that categorize the White papers by the vertical markets to which they apply; currently you can find the following research:

1. Patient Misidentification in Healthcare – “Eliminate Patient Fraud and Increase Patient Identification Accuracy with Vascular and Iris Recognition Biometric Identification Technology” – This White paper examines the growing concern of medical identity theft and patient misidentification, measures the negative impact they have on patient care, and explains how healthcare facilities can use palm vein and iris recognition biometric technology to correctly identify patients.


2. Retail Point of Sale (POS)/Workforce Management – “Eliminating Time Theft and Increasing Profits with PC-Based Biometrics” – This White paper details the effect that time theft, manual labor tracking methods, and non-compliance can have on employee productivity and the corporate bottom line. It then studies how PC-based biometric identification technology is a smart solution to halting these productivity and profit killers and why companies should consider incorporating biometrics for employee identification.


3. U.S. Biometrics – “The Perception of Biometrics in the United States” (co-written by Ravi Das from – Biometric technology is quickly being adopted across the globe for a multitude of purposes ranging from border security to voter registration to benefit entitlement parity. Despite the wide-scale adoption of biometrics in other countries, it has been slow to catch on here in the U.S. This White paper studies theories as to why biometrics has not been embraced in this country, fears about how biometrics affects privacy and civil liberties, and steps biometric vendors can take to educate the public on the technology, and concludes with what can possibly be done to increase U.S. adoption rates.


4. Global Biometrics – (White paper forthcoming) – Due to be released within the next month, this White paper will focus on future applications and growth areas of biometric technology as seen through the eyes of biometric vendors from all over the world.

We hope that you enjoy our collection of White papers and welcome any comments or feedback on the content. Have a suggestion for a White paper topic? Let us know in the comments section below.

Face Recognition: Improved Benefit? Or Erosion of Privacy?

Is facial recognition intrusive in our society?

Facial recognition

The following is a guest post from Carl Gohringer, founder of Allevate Limited (

A Surveillance Society?

I sat in Heathrow waiting for an early morning departure on a business trip. Sipping my coffee, I looked casually around, trying to spot the cameras. They’re cleverly hidden. Am I being watched? Doubtful. Am I being recorded? Almost certainly.

This is a daily fact of life for most Londoners. It’s widely known that our city is one of the most heavily recorded in the world, a fact that is consistently debated and often criticized. Yet for all the discussion, the fact remains. We don’t like it, but we accept it. Why? Personally, my true dislike is more for the necessity of this fact than for the fact itself.

Carol Midgley wrote an excellent opinion piece (The Times, Sat 27th August, 2011) entitled “I’ll pick Big Brother over a hoody every time”. I recommend a read. Though clearly biased, and seemingly designed to stoke the debate with anti-CCTV campaigners, her conclusion was simple: In the wake of the London riots, the privacy-versus-necessity debate of CCTV is now all but dead. Do I agree? Let me come back to this.

Face Recognition and CCTV

Enter Biometrics. Face recognition technology to be precise. This technology, along with the wider field of video analytics, is set to transform CCTV surveillance. Video analytics is arguably a nascent technology, but face recognition on the other hand is here. Ready to deploy. Now. A recent study by the US National Institute of Standards and Technology (NIST) demonstrated that the accuracy achieved by the first place vendor (NEC) can provide clear and measurable benefits to a range of applications, including surveillance.

It seems that every new technology brings a realisation of new benefits and efficiencies, countered by a plethora of malicious uses of the technology by the less desirable elements of our global society, quickly followed by counter-measures and protections. This is a saga that we are all already familiar with in our daily lives. Examples range from the severe and extreme of nuclear medicine versus atomic weapons, through to online credit-card shopping versus financial identity theft. I’ve recently had a credit card used for over £3,500 of illegal transactions. Though this incident was highly inconvenient and disruptive to my life, I did not hesitate to accept a replacement card. Not to do so would have unacceptably disenfranchised me from modern society.

Back to face recognition. It hasn’t taken long for business-minded technology companies to devise a whole range of new uses for this technology, all focused on delivering bottom-line business benefit. Almost as quickly come the cries of the privacy advocates. I’ve been reading with interest the sudden explosion of mainstream news over the past few months highlighting new uses of face recognition, while carefully considering the concerns vociferously raised by the technology’s opponents. A key fact often cited is that the technology is not 100% accurate. Even an excellent identification rate of 97% can produce a significant number of false and/or missed identifications in a large sample population.
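The scale problem behind that 97% figure is easy to make concrete with a little arithmetic. The population size and rates below are illustrative assumptions, not measured figures, and they loosely treat "97% accurate" as a 3% false-positive rate:

```python
# Even a small false-positive rate produces many false alerts
# when the screened population is large and the watchlist is short.
population = 1_000_000          # faces screened (illustrative)
watchlist_hit_rate = 0.000_01   # 10 genuine targets per million (assumed)
false_positive_rate = 0.03      # a "97% accurate" system, loosely

genuine_hits = population * watchlist_hit_rate    # about 10
false_alarms = population * false_positive_rate   # about 30,000

print(f"{false_alarms:,.0f} false alarms for {genuine_hits:.0f} real hits")
# roughly 3,000 false alarms for every genuine identification
```

This base-rate effect is why an accuracy figure that sounds excellent in isolation can still swamp operators with false matches once the screened population grows large enough.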

Let’s take a look at some examples.

Public Safety and Policing

Sitting here in the terminal waiting for my flight, I’ve already grudgingly accepted that images of me sipping my coffee are almost certainly being recorded. I may not be aware, however, that when I passed through security my photograph was taken. This wasn’t immediately obvious or openly advertised, but it happened. Shortly, my photograph will be taken again when I board my aircraft and compared to the one taken at security. International and domestic passengers share a common departure area, and this check ensures boarding cards aren’t swapped, which could otherwise enable an international passenger to transit through to a domestic airport and bypass immigration controls. In 1:1 verification, false matches are very rare. If I’m a legitimate passenger, my concern is that the two photographs fail to match, for which the worst-case scenario is inconvenience.

Perhaps the borders agency is also comparing my photograph against a watchlist of suspect individuals. This kind of deployment is usually used to enhance existing procedures, not replace them. The system will provide increased security, in turn further protecting my safety while flying. I’m OK with this. Of course, there is also the prospect of misidentifying benign travellers. Though unavoidable, as long as the number of false matches is kept sufficiently low that the cost of dealing with these exceptions doesn’t obliterate the benefit realised from the system, it can be argued that the greater good justifies the inconvenience faced by the occasional innocent passenger while their true identity is verified.

Upon my arrival at my destination, I may very well be offered the opportunity to use my new e-passport to speed through immigration at one of the many shiny automatic e-Gates springing into operation. In the early stages these definitely were a great benefit, allowing me to march past the long queues of travellers and expedite my passage through the airport. No complaint from me. As long as false matches are lower than what is achieved by a live border guard (which many studies suggest they are), then security should be improved. And false matches only apply to illegal passengers travelling on a false or stolen passport. Exceptions generated by valid travellers who do not match with their passport will generate some inconvenience by necessitating they speak to a live border guard. As e-gates become more commonplace, I predict I’ll just be queuing in front of an automatic barrier instead of a manned immigration booth. However, the efficiencies achieved should enable the border guards to concentrate on more intelligence-led activities, rather than simple rote inspection of passports, thereby increasing security and putting my taxes to more efficient use.

As I move through the airport, or for that matter in any public location such as a stadium or railway station, law enforcement authorities may be using my captured image to search against a database of suspects. Does this trouble me? Let’s look at a couple of scenarios.

I’m already being recorded. If I were to commit a crime, then it is likely that the video would be retrieved and officers would try to identify me. This is already happening and I doubt anybody would argue that this is an invasion of privacy. If face recognition technology can assist them with this arduous and tedious task, perhaps by automatically trying to match my face against databases of known offenders, and saving countless hours of police time, I’m all for it. Too bad for the criminal.

(I was incensed by the meaningless violence and destruction demonstrated during the recent riots in London. Newspaper reports have indicated that the UK’s police will be examining CCTV footage for years to come in their efforts to bring the perpetrators to justice. I am absolutely in favour of anything that can be done to expedite this process and save police time.)

But as a law-abiding citizen carrying on with my own business, how do I feel about having my face automatically captured and compared against a watchlist database of “individuals of interest”? There is potential to cause disruption to an individual’s life or place them under undue suspicion if they are falsely identified. That my face is being actively processed rather than just recorded gives more cause to pause and consider.

Having done this, I am prepared to accept this use case, if the technology is operating at a sufficient level of accuracy to ensure that the chances of being misidentified while conducting my daily activities remains low. I also expect the technology to be deployed wisely in situations where there is demonstrable benefit to public safety, such as at transport hubs, large gatherings, public events or areas of critical national infrastructure.

Most people already accept that the reality of the world today necessitates certain infringements on our liberties. The introduction of technology is a key tool in the fight against crime. No system is perfect, and the potential for an undesirable outcome of a system should not always result in the abolishment of that system. Few would argue, for example, to abolish our judicial systems and close our prisons to eliminate the possibility of a miscarriage of justice. Similarly, the benefits to public safety from face recognition are too great to ignore, though we must continuously strive to minimise the false identifications.

I agree with Ms. Midgley on this one.

Commercial Applications

Most of the criticism that I have been reading in the press in the past few months appears to be levelled at the widening application of face recognition in business-related or commercial applications, not at public safety.

My flight is about to board, so let’s continue my journey through the terminal. As I saunter to my gate, my attention is caught by an impressive advertising display: a multi-plasma video wall. It was the amazing technology that caught my attention rather than the advert itself. Just as I’m about to glance away, the sunlit beach and blue ocean depicting the under-30s’ surfing holiday fades away, to be replaced by a two-for-one spectacle offer, followed by a distinguished gentleman telling me how easy it was for him to “wash that grey away”.

As I self-consciously stroke the hair at my temples, I wonder: was this a mere coincidence? Multiple vendors delivering solutions for advertising have announced technology that can count the number of people watching an advert at any given time, and even estimate their age, dwell time, sex and race. While providing invaluable information for the advertiser, it can also allow them to dynamically change the adverts in real time to more appropriately target the demographic of the current viewer(s). Recent reports in the Los Angeles Times (21st August 2011) suggest that this is already widely deployed in Japan, and is being considered by the likes of Adidas and Kraft in the UK and the US.

While this is not technically face recognition, it is still worth noting, as much of what I have been reading lumps the two technologies together. The key consideration here is that this form of technology is not actually identifying anybody, or extracting personally identifiable information. This doesn’t bother me in the least. Businesses have always tried to use whatever edge they can to more tightly tailor their message to their customers’ specific needs and wants. It may even benefit me by alerting me to more relevant products or services.

What if, on the other hand, the advertiser had negotiated an arrangement with another organisation, for example a social networking site such as Facebook? If the advertiser supplied Facebook with an image of my face, along with information on which portion of the advert caught my attention, Facebook might be able to identify me from its database of photographs, enabling it to harvest valuable information about me. While I can see this would present a huge commercial advantage to them, and to whomever they chose to sell this information on to, I can only hope that the commercial damage from the backlash of incensed users would outweigh the gain.

If I have some leisure time while on my business trip, there will doubtlessly be many activities at my destination to occupy me. I may have a quiet drink in a bar, or perhaps take a punt at the tables in the local casino. And yes, face recognition technology is being used even in these places. It’s been reported that bars and clubs are using gender and age distinguishing cameras to count people in and out, and make this information available over mobile phone apps. The youth of today can now determine before they set out which establishment holds their best chance of success. While I am well beyond having any use for this particular application, I can see how this may catch on in certain demographics of society. Any reputable establishment should clearly display such technology is in use and should make no attempt to harvest or make available any personally identifying information. Are all establishments reputable?

More concerning to me is the increasing use of face recognition by social network sites. Both Google and Facebook are actively exploring uses. Automatic tagging of photographs being uploaded to Facebook is already occurring. Being inadvertently photographed while on my business trip and automatically tagged when the photographer uploads it does not appeal to me, no matter how innocuous my activities at the time may happen to be.

Recent studies published by Carnegie Mellon University demonstrating the potential to use large databases of photographs on social networking sites to glean confidential information should also be a cause for concern. The younger generation of today appear more and more willing to share intimate and private details online, without any thought (in my view) of the longer-term or wider ramifications of doing so. This is an issue that is much larger than face recognition, but I can understand the worry that face recognition can help to tie it all together.

Improved Benefit or Erosion of Privacy?

When I first entered the biometrics field, I was attracted by the “neatness” factor of the technology, and of the potential for it to deliver benefits to society. I have to admit I paid scant attention to privacy concerns. Over time, as the voices of privacy advocates grew louder and more numerous, I started to listen and then to actively seek out their opinions. I am still a firm believer in this amazing technology, and endeavour to play an active role in its application for the positive transformation of society. However, I am grateful for the messages and insight provided by these campaigners; they have definitely transformed my thinking, and have made me consider much more carefully the application of biometrics.

From a law-enforcement and public safety viewpoint, face recognition holds great potential to increase the security of our society. By its very nature, our government holds power over us and our society, which is why it is our responsibility to choose our governments carefully. We have no choice but to hold a certain level of trust and faith in our law-enforcement organisations. Our society today contains more checks and balances than ever before, and our politicians are more in tune with and responsive to the public mood. If this faith breaks down, then so does society.

In commercial applications, I also believe there is the potential for significant benefit to be realised from face recognition to both the consumer and businesses, but I am more concerned about the potential for abuse. To a certain level, the market will decide if the application of the technology is appropriate or not. Ventures people don’t like will fail. However we cannot always rely on market forces, and it is our collective responsibility to speak out when the need arises. Though it often lags behind, over time legislation keeps up with the advancement of technology. As our society changes with technical innovation, so too will the rules we collectively decide to govern our society. We will settle into an equilibrium reflecting the needs and views of all. But there will be a learning curve, and we will make mistakes along the way. That’s how society works.

So, does face recognition represent an improved benefit, or an erosion of privacy? I suggest it has the potential to be both. It is everybody’s responsibility to ensure the benefit is worth the price paid. I absolutely believe we must have both the proponents of this technology and the advocates of privacy; we all have a role to play in deciding how face recognition will be applied over time.

The abolishment of either the technology or the voices of those monitoring its use and advocating our privacy would be to the detriment of society.

Final Thought

Just before I board my flight, let me leave you with this final thought. Imagine for a moment that a loved one of yours has come to harm. The authorities can use face recognition to aid in their recovery, and/or to ensure that justice is done. Are you still concerned with privacy?

As founder of Allevate Limited, Carl focuses on the promotion and marketing of large-scale and global identification infrastructure projects using biometric technology.

Do Employees Have a Right to Refuse Enrollment in a #Biometric System?

Biometrics is a Growing Identification Technology

It’s no secret that biometric technology deployments are on the rise.  Increasingly, retailers are catching on to the unique benefits and security that biometric technology offers to positively identify an individual by their physiological characteristics instead of through ID cards, personal identification numbers or passwords.  The rapid growth of biometric technology seemed to begin shortly after we shifted into a society aggressively focused on safety and security in the wake of the rise in global terrorism.  Biometrics was soon recognized as the only technology that could tell with near absolute certainty that someone was who they claimed to be.  Governments were the first to actively use biometric identification to secure their intellectual and physical property and then slowly expanded to border control and public safety.

employee's rights to enroll in a biometric identification system

Federal Courthouse

The progression of biometric technology didn’t stop solely with security deployments though; it kept on growing and progressing.  As price points dropped and the technology became more refined, deployments began to shift to the private sector as companies took notice that biometrics had strong potential to help them with problems like employee time theft, inventory shrink, identity theft, compliance and fraud.  Widespread adoption by the private sector fueled the growth of biometric systems designed to positively identify individuals to prevent these problems, and with this growth came increased scrutiny of the technology (specifically how individual biometric data is stored and what it may be used for other than identification) by privacy advocates and proponents of civil liberty protection.  They argue that biometric technology violates individual privacy unless there is a 100% guarantee both that templates are safely stored and cannot be stolen, and that governments are not using the data to track citizens interacting with a system and subsequently disseminating the information collected to external bodies.

These arguments are strong but perhaps a closer look at how the technology works would help uncover some answers to these concerns and clear up some misconceptions about biometric technology.

The Privacy Issue – How Does Biometric Technology Actually Work?

Most people believe that when an individual places their finger on a fingerprint reader to register their identity in a biometric system, an image of their fingerprint(s) is stored somewhere on a server or a computer.  In actuality this is typically not the case.  Instead, the biometric matching software extracts and stores what is known as an identity template.  This is a mathematical representation of data points that a biometric algorithm extracts from the scanned fingerprint.  The biometric identity template is simply a binary data file, a series of zeros and ones.  The algorithm then uses the template to positively identify an individual during subsequent fingerprint scans.  No image is ever stored or transmitted across a network.  In addition, the algorithm is “one way,” meaning it is nearly impossible to recreate the original biometric image from the template; in other words, an attacker cannot feasibly reverse engineer the stored data and “steal” a person’s biometric identity.
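The “one way” property can be conveyed with a deliberately simplified sketch. Heavy caveats apply: real fingerprint matchers compare templates fuzzily rather than by exact hash, and the quantisation step and feature points below are invented for the example; the sketch only illustrates why the stored bytes cannot be turned back into an image.

```python
import hashlib

def make_template(feature_points):
    """Toy one-way template: quantise minutiae-style (x, y) points, then hash.
    Many different inputs map to the same digest, so the original
    fingerprint cannot be reconstructed from the stored value."""
    quantised = sorted((round(x, 1), round(y, 1)) for x, y in feature_points)
    return hashlib.sha256(repr(quantised).encode()).hexdigest()

# Two scans of the same finger, differing only by tiny sensor noise,
# quantise to the same points and therefore the same template.
scan_1 = [(10.02, 4.98), (3.51, 7.49)]
scan_2 = [(10.04, 4.97), (3.53, 7.51)]
print(make_template(scan_1) == make_template(scan_2))   # same identity

# A different finger yields an unrelated template.
other = [(1.00, 2.00), (8.00, 3.00)]
print(make_template(scan_1) == make_template(other))    # no match
```

Because the hash runs in only one direction, even an attacker who steals the stored template gains a string of digits, not a fingerprint.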

Understanding these processes is central to realizing how the danger of identity theft or a security breach is significantly lessened, if not completely eliminated, through the use of a proprietary algorithm with no stored image and data encryption.  Biometric templates are also not linked to anything in a closed system that can positively identify an individual outside of that system.

However, privacy advocates strongly feel that the capture, storage and use of biometric data (specifically by governments, either through mandated deployments for social services or through requests for data and records from private businesses) to assemble a comprehensive citizen knowledge base, and thereby exercise covert control over society, would be a violation of individual privacy; this remains a valid point.

Can an Employee Claim that Using Biometric Technology is a Violation of Their Privacy?

If you adopt biometric technology for time and attendance, access control or another deployment within a business, do employees have a right to refuse participation on the grounds that it violates their privacy and/or individual civil liberties?   It’s an interesting question.  Without irrefutable proof that a biometric database can be hacked and its templates reverse engineered into images, if an employee did decide to decline participation, would they be able to prove their claim that the technology did in fact violate their civil liberties?

There have not been any known cases in the U.S. of an employee taking their employer to court for wrongful termination or a violation of their equal opportunity rights after refusing to enroll in a biometric identification system.  However, shouldn’t biometric information be treated like any other personally identifiable data that an employer keeps on file, such as social security numbers, photographs, or bank information for direct deposit?  Information that, if stolen, could be used to impersonate you?  Most companies already have policies in place that govern the safe protection of this data, and biometrics should arguably be included and not treated any differently.  It should be handled the same way as the data you have already surrendered, and which is stored, simply by being an employee of the company.

Most employers also monitor their employees’ activities while they are at work, which could include video, email and telephone monitoring.  An employee is then asked to sign that they received and read the employee manual that explicitly states their acknowledgement that they will be monitored throughout their employment tenure.  Remember that this is not a request for permission to be monitored; it is an agreement that the employer will be doing it.

It is also important to note that if you have a Twitter or Facebook account, purchase on the Internet, use credit cards at brick-and-mortar establishments, subscribe to publications on the Internet, or have any form of insurance or bank account, you no longer have much privacy. If you use one or more credit cards, the credit card company knows where you eat, what you eat, what kind of car you drive, where you live, what insurance you have, where you spend your vacations, what you read, how much you spend on shoes and more.  If you use most social media platforms, you have publicly given up every bit of privacy you ever had.    Although these are personal preferences, it makes the argument hard to justify that enrollment in a biometric system is any more egregious than most of the other daily online and offline activities that we participate in.

biometric time clock

M2SYS Guest Blog Post on Privacy and Developing a More Thorough Understanding of Biometric Technology

"Biometrics erases privacy"

Does biometric technology erase privacy?

M2SYS was given the opportunity to write a guest blog post on developing a more thorough understanding of biometrics to help address some of the concerns that privacy advocates have about using the technology.  It was a response to a recent guest post on biometric privacy concerns by James Baker, political consultant for NO2ID in the UK.

Here is a link to the post:

Thank you to James for allowing us to present our opinions and perspective on the subject and we hope to augment the existing research efforts on biometrics and privacy to help bridge the gaps that exist between the industry and privacy advocates.