When big data isn't big enough
From AI-driven insights to web3 innovations: how not to break your users' hearts.
About ten years ago, there was a boom of tech companies pushing marketing messages about being “Powered by Big Data”. Without anyone really explaining what that term meant for their customers or their own business, it just seemed like the thing to say if your product was collecting a significant amount of data.
Big data is a term used to describe data sets that are so large and complex that they cannot be analysed using traditional methods.
In 2023, big data is well and truly getting Bigggger. And we’re all “Powered by AI” ;)
With AI creating new content data (text, images, videos, etc) and also being able to analyse complex data sets, we’ve entered a new era of big data that is opening up new opportunities to link several data sources, identify patterns and make better predictions.
Ten years ago, data scientists and analysts spent hours using descriptive statistics, data mining, and hand-rolled models to crunch through big data. Now we’re using AI to save those hours and get ahead in applying data insights to improve products, solve business problems, and build even more sophisticated AI models.
As big data and AI become more powerful and widely used, it is important to consider the ethical implications of their use.
The Dual Nature of Data: Personalisation vs. Scandals
In 2018, the Cambridge Analytica data scandal came to light, revealing how the firm used data mining and psychographics to build psychological profiles of voters and target them with political advertising designed to influence their vote. It highlighted how easily large amounts of personal data can be collected without people’s knowledge or consent, and the need for stronger privacy laws and regulations.
Five years on, with more consent mechanisms in place for collecting and using data, I still feel concerned about more data scandals happening that are “Powered by AI”.
Everything has two sides to it: collecting, merging, and analysing personal data to generate “personalised” content could be a recipe for many more harmful scandals, and at the same time it’s the new “modern” recommendation for companies wanting to empower their UX and improve the personalisation of their products and services.
While we’re asking GPT to write poems and Silicon Valley startups are rapidly adding AI to every touchpoint on the internet, governments are figuring out how to regulate this new era of AI.
The European Union is finalising a new Artificial Intelligence Act (AIA) designed to ensure that AI is developed and used in a way that is safe, ethical, and beneficial to society.
A US Congress hearing discussed AI with Sam Altman (OpenAI CEO), Christina Montgomery (IBM Vice President), and Gary Marcus (professor of psychology and neural science at New York University), covering AI’s potential to do good for society while also highlighting the importance of taking steps to mitigate harmful applications.
Data Sovereignty with Web3
Web3 is still in its early stages of development, but it has the potential to spice up the data world. One of the key areas of web3 development is focused on data sovereignty – giving users more control over their data and improving privacy. New products and tools being built in web3 that use decentralised data storage, blockchains, NFT technology, encryption and decentralised identities could make a big practical impact on how data is used by businesses and AI.
For example, rather than a company storing personal information about an individual in centralised data storage, the information is stored in a decentralised way: the individual has control and ownership over their data, and apps ingest it from the user when they log in. This can reduce the privacy-law liabilities on the business and improve privacy for the user.
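As a minimal sketch of that login-time flow (the class and method names here are purely illustrative, not any specific web3 SDK), a user-owned vault might grant and revoke app access like this:

```python
class UserDataVault:
    """Illustrative user-owned data store: the user, not the app, holds the data."""

    def __init__(self, owner):
        self.owner = owner
        self._records = {}
        self._granted = set()

    def put(self, key, value):
        self._records[key] = value

    def grant(self, app_id):
        # The user explicitly grants an app read access, e.g. at login.
        self._granted.add(app_id)

    def revoke(self, app_id):
        self._granted.discard(app_id)

    def read(self, app_id):
        if app_id not in self._granted:
            raise PermissionError(f"{app_id} has no access to {self.owner}'s vault")
        return dict(self._records)


# The app can ingest data only while the user has granted access.
vault = UserDataVault("alice")
vault.put("email", "alice@example.com")
vault.grant("shop-app")
profile = vault.read("shop-app")   # app session uses the data
vault.revoke("shop-app")           # user withdraws access afterwards
```

The key design point is that access is a revocable grant from the user rather than a permanent copy held by the business.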
Building Data Trust
In this age of big data, it’s more important than ever before for companies to build trust with users by being transparent about how they collect, use and share data. When users trust a business with their data, they are more likely to be loyal to that business because they feel confident that their data is safe and that it’s not used in a way that harms them.
Building data trust means being clear about what data is collected, how it is used, and who has access to it. It also means allowing users to control their data and opt out of data collection and sharing. While having an easy-to-read privacy policy is a good start, the practical way of achieving this level of data transparency is likely to be found in web3.
At Wunderbar Network, we’ve released ✨Mini Digital✨, a privacy-focused and web3-compatible product analytics tool that helps businesses collect user behaviour data from their web apps and services while being transparent about what data is collected and why, and that lets users review the collected data with the Data Pockets feature. A data pocket is created between the business and a verified user; the data is encrypted and securely transferred to the pocket, where the user can decrypt and review it. (Follow us on Twitter @wunderbar_net & @MiniDigitalData for updates 👀)
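The encrypt-transfer-decrypt-review loop could be sketched like this. This is a toy model: the XOR “cipher” below is a stand-in for real encryption (never use it in production), and none of it reflects Mini Digital’s actual implementation.

```python
import hashlib
import json
import secrets


def keystream(key, length):
    """Derive a deterministic keystream from the key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]


def xor_cipher(key, data):
    # XOR with a hash-derived keystream; symmetric, so the same call decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))


# Business side: serialise collected events and encrypt them into the pocket.
shared_key = secrets.token_bytes(32)  # agreed between business and verified user
events = [{"event": "page_view", "path": "/pricing"}]
pocket = xor_cipher(shared_key, json.dumps(events).encode())

# User side: decrypt the pocket and review exactly what was collected.
reviewed = json.loads(xor_cipher(shared_key, pocket).decode())
```

The point of the pattern is that the data in transit and at rest in the pocket is opaque to anyone without the shared key, while the verified user can always recover a readable copy to review.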
So yeah. Big data and AI are here to stay and we shouldn’t avoid conversations about how we can use these technologies in a safe, ethical, and beneficial way. One way to do this is to build trust with users by being transparent about how we collect, use, and share their data and use web3 tech to empower data sovereignty.