Inside Singapore’s vision for data and privacy

By Yun Xuan Poon

Interview with Yeong Zee Kin, Assistant Chief Executive (Data Innovation and Protection), Infocomm Media Development Authority.

Just about a year ago, social scientist Shoshana Zuboff’s ‘The Age of Surveillance Capitalism’ hit the shelves. In December, Barack Obama named it one of his favourite books of 2019.

Zuboff’s book exposes the mass collection and monetisation of private data by tech companies, and the dangers this poses to society. “People are crying out for an alternative path to the digital future, one that will fulfil our needs without compromising our privacy,” she writes in the Financial Times.

Yet the data economy is where many countries have staked their futures. At a global conference in Barcelona last year, the Singapore Government announced plans to become a global hub for developing artificial intelligence. GovInsider caught up with Yeong Zee Kin, Assistant Chief Executive (Data Innovation and Protection) at the Infocomm Media Development Authority, on how the country will address the risks and rewards of mass data sharing.
 

AI and data protection


Singapore’s data protection law requires companies to inform customers of what personal data is collected and what it will be used for. A separate policy, the Model AI Governance Framework, sets out how companies should disclose the use of AI in their services.

Regulating data protection has become more complex in the face of new issues, Yeong says. “A box-checking compliance-only approach towards the handling of personal data is increasingly impractical and insufficient to keep pace with the developments in data processing activities.”

The rise of tech like AI and the Internet of Things (IoT) has made obtaining consent more complicated. “As big data is used in AI technologies to train machine learning models, it will be increasingly difficult for organisations to meet the personal data protection requirements of, say, consent, if personal data is included in the dataset,” notes Yeong. In the case of IoT sensors in public spaces, there is often no user interface to get consent before data is recorded, he adds.

Another issue with the data used for AI is the risk of poisoned data, which can lead to ineffective or even malicious AI systems. France has recognised this as a priority in its national cybersecurity strategy. “When using data to train machine learning models, it is important to have an awareness of the potential limitations of data such as accuracy, cleanliness, timeliness, relevance, reliability, representativeness, etc.,” advises Yeong.
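In practice, several of the checks Yeong describes can be automated before a dataset reaches a training pipeline. The sketch below is a minimal illustration, assuming a tabular dataset loaded with pandas; the function, column names and thresholds are hypothetical examples for this article, not part of IMDA’s guidance.

```python
import pandas as pd

def basic_quality_report(df: pd.DataFrame, timestamp_col: str, label_col: str) -> dict:
    """Simple checks against a few of the dimensions Yeong lists:
    cleanliness, timeliness and representativeness. (Accuracy and
    reliability usually need a trusted reference source and cannot
    be verified from the dataset alone.)"""
    report = {}

    # Cleanliness: missing values per column, and exact duplicate rows.
    report["missing_ratio"] = df.isna().mean().to_dict()
    report["duplicate_rows"] = int(df.duplicated().sum())

    # Timeliness: how stale is the most recent record?
    newest = pd.to_datetime(df[timestamp_col]).max()
    report["days_since_newest_record"] = (pd.Timestamp.now() - newest).days

    # Representativeness: class balance of the training label.
    report["label_distribution"] = df[label_col].value_counts(normalize=True).to_dict()

    return report

# Hypothetical usage: flag problems before the data is used for training.
# report = basic_quality_report(df, timestamp_col="recorded_at", label_col="outcome")
```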
 

Dynamic consent


Yeong notes that data protection policies will have to be reviewed continually as we understand tech like IoT and AI better. Consent is one of the areas that need rethinking. “Unless we update our concepts and interpretations, old concepts like express consent will limit our ability to re-use data even when there is no adverse impact on individuals,” he says.

Singapore’s law requires organisations to ask customers for their consent when collecting their personal data, as well as to explain what the data will be used for. But in some cases, companies may not be able to state all the ways in which the data collected would be used in the future. “Our work in promoting dynamic consent becomes relevant at this stage, if consent has to be obtained,” says Yeong.

Under IMDA’s guidelines, organisations can seek dynamic consent by sending pop-up notifications to users just before their data is shared, or by setting up a dashboard where users decide how much of their personal information companies may share.
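To illustrate how those two mechanisms might fit together, here is a minimal sketch of a per-user consent store that is checked, and can prompt the user, just before each new use of their data. The ConsentDashboard class and its methods are hypothetical; IMDA’s guidelines do not prescribe any particular implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentDashboard:
    """Per-user, per-purpose consent that can be granted or revoked at
    any time -- the 'dashboard' half of the dynamic consent examples."""
    preferences: dict[str, bool] = field(default_factory=dict)

    def set_preference(self, purpose: str, allowed: bool) -> None:
        self.preferences[purpose] = allowed

    def may_share(self, purpose: str) -> bool:
        # An unseen purpose triggers the 'pop-up' half: ask just before sharing.
        if purpose not in self.preferences:
            answer = input(f"Share your data for '{purpose}'? [y/n] ")
            self.preferences[purpose] = answer.strip().lower() == "y"
        return self.preferences[purpose]

# Hypothetical usage: consent is checked immediately before each new use.
dashboard = ConsentDashboard()
dashboard.set_preference("order_fulfilment", True)
if dashboard.may_share("marketing_analytics"):  # prompts, since never asked before
    pass  # share the data with the analytics partner
```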

“There could also be instances where consent is not desirable or appropriate, such as for fraud or money laundering detection or security threats,” Yeong notes. Datasets that do not include personally identifiable information can also be shared safely without consent, he explains. He says Singapore is currently reviewing its Personal Data Protection Act to spell out situations in which consent is not required.
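As a rough sketch of preparing a dataset for sharing without consent, the example below drops direct identifiers and replaces a customer ID with a salted pseudonym. The column names are hypothetical, and this alone is pseudonymisation rather than true anonymisation: quasi-identifiers such as postcodes or birth dates can still re-identify individuals when combined.

```python
import hashlib
import pandas as pd

DIRECT_IDENTIFIERS = ["name", "email", "phone"]  # hypothetical column names

def pseudonymise(df: pd.DataFrame, id_col: str, salt: str) -> pd.DataFrame:
    """Drop direct identifiers and replace the customer ID with a salted
    hash. Note: quasi-identifiers left in the data may still allow
    re-identification, so this is only a first step."""
    out = df.drop(columns=DIRECT_IDENTIFIERS, errors="ignore").copy()
    out[id_col] = out[id_col].astype(str).map(
        lambda v: hashlib.sha256((salt + v).encode()).hexdigest()[:16]
    )
    return out

# Hypothetical usage: the salt is kept secret and never released with the data.
# shareable = pseudonymise(df, id_col="customer_id", salt=SECRET_SALT)
```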
 

Building trust and accountability


All innovation carries risk, since data is being shared in ways it never has been before. “Our approach is to emphasize a shift to accountability for personal data protection,” Yeong says.

Before data is shared in a new way, organisations are encouraged to carry out data protection impact assessments. Companies should measure the risks associated with collecting and sharing data, and use this to draw up mitigation measures before sharing customer data, explains Yeong.

Understanding risk also gives companies room to experiment with new tech while protecting customer data. For instance, Singapore uses regulatory sandboxes, where companies can explore new data sharing methods under relaxed rules in a controlled environment, to understand the privacy implications of novel tech.

Developing a network of “trusted data” is a key aim of Singapore’s data protection guidelines. For instance, Singapore recognises organisations that manage customer data responsibly through trust certificates. This will “help organisations strengthen trust with the public and provide greater assurance to their customers,” says Yeong.

Singapore will also amend its Personal Data Protection Act to require organisations to notify the Personal Data Protection Commission and affected customers in the event of a data breach, says Yeong. This aligns with a report released last November detailing how the public sector can better manage citizen data. The report called for a standardised response procedure to data incidents in government. It also highlighted the importance of “developing a culture of open reporting of all types of data incidents” within government so all incidents can be accounted for.

The conversation on data protection is unavoidable in Singapore’s journey towards becoming a data-driven nation. It will have to tread a fine line between progress and privacy, pioneering new approaches along the way.

This article was amended to address factual errors about the scope of Singapore’s PDPA and Model AI Governance Framework.