Think about the last app you downloaded. Did you read every word of the associated privacy policy? If so, did you fully understand it?
If you said "no" to either of these questions, you are not alone. Only 6% of Australians claim to read all the privacy policies that apply to them.
Don't blame yourself too much, though. Privacy policies are often long - sometimes up to 90,000 words - and hard to understand. And there may be hundreds that apply to the average internet user (one for each website, app, device, or even car you use).
Policies also change over time, so keeping up requires regular re-reading. In 2023, for example, Elon Musk's X updated its privacy policy to include the possibility of collecting biometric data.
For these reasons, some privacy scholars have argued that it's nearly impossible for us to properly manage how our personal data are collected and used online.
But even though it might be hard to imagine, we can regain control over our data. Here are three possible reforms to online privacy policies that could help.
1. Visuals-based privacy policies
One way to shorten privacy policies is by replacing some text with visuals.
Recently, the Australian bank Bankwest developed a visual-style terms and conditions policy to explain one of its products. A consulting engineering company also used visuals in its employment contract.
There is evidence suggesting that such approaches promote transparency and help users understand a policy's contents.
Could visuals work with online privacy policies? I think companies should try. Visuals could not only shorten online privacy policies, but also make them more intelligible.
2. Automated consent
Adding visuals won't solve all the problems with privacy policies, as there would still be too many to go through. Another idea is to automate consent. This essentially means getting software to consent for us.
One example of this software, currently being developed at Carnegie Mellon University in the United States, is personalised privacy assistants. The software promises to:
learn our preferences and help us more effectively manage our privacy settings across a wide range of devices and environments without the need for frequent interruption.
In the future, instead of reading through hundreds of policies, you might simply configure your privacy settings once and then leave the accepting or rejecting of policies up to software.
The software could raise any red flags and make sure that your personal data are being collected and used only in ways that align with your preferences.
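The kind of matching such an assistant performs can be sketched in a few lines of code. This is a hypothetical illustration only, not the Carnegie Mellon software's actual design: the preference categories, data structures and function names here are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of automated consent: compare a policy's declared
# data practices against a user's pre-configured privacy preferences,
# and raise red flags for anything the user has not allowed.

# The user sets preferences once, as (data type, purpose) -> decision.
user_preferences = {
    ("location", "core_service"): "allow",
    ("location", "advertising"): "deny",
    ("biometric", "advertising"): "deny",
}

# Practices a (fictional) policy declares it will engage in.
policy_practices = [
    {"data": "location", "purpose": "core_service"},
    {"data": "location", "purpose": "advertising"},
    {"data": "biometric", "purpose": "advertising"},
]

def review_policy(practices, preferences, default="flag"):
    """Return a decision per practice, plus red flags for anything
    that is denied or that the user has expressed no preference on."""
    decisions, red_flags = {}, []
    for practice in practices:
        key = (practice["data"], practice["purpose"])
        decision = preferences.get(key, default)
        decisions[key] = decision
        if decision != "allow":
            red_flags.append(key)
    return decisions, red_flags

decisions, red_flags = review_policy(policy_practices, user_preferences)
```

In this sketch, the two advertising-related practices would be flagged for the user's attention, while the core-service use of location data would be consented to automatically, without the user reading the policy at all.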
The technology does, however, raise a series of ethical and legal issues that will need to be wrestled with before widespread adoption.
For example, who would be liable if the software made a mistake and shared your data in a way that harmed you? Furthermore, privacy assistants would need their own privacy policies. Could users easily review them, and also track or review decisions the assistants made, in a way that was not overwhelming?
3. Ethics review
These techniques may have limited success, however, if the privacy policies themselves fail to offer user choices or are deceptive.
A recent study found that some of the top fertility apps had deceptive privacy policies. And in 2022, the Federal Court of Australia fined Google for misleading people about how it used personal data.
To help address this, privacy policies could be subject to ethical review, in much the same way that researchers must have their work reviewed by ethics committees before they are permitted to conduct research.
If a policy was found to be misleading, lacking in transparency, or simply failing to offer users meaningful options, it would not be approved.
Would this really work? And who would be included in the ethics committee? Further, why would companies subject their policies to external review, if they were not required to do so by law?
These are difficult questions to answer. But companies that did subject their policies to review could build trust with users.
Testing the alternatives
In 2024, Choice revealed that several prominent car brands, such as Tesla, Kia, and Hyundai, collect people's driving data and sell it to third-party companies. Many people who drove these cars were not aware of this.
How might the above ideas help?
First, if privacy policies had visuals, data collection and use practices could be explained to users in easier-to-understand ways.
Second, if automated consent software were in use and users had a choice, the sharing of such driving data could be blocked in advance, without users even having to read the policy, if that was what they preferred. Ideally, users could pre-configure their privacy preferences and let the software do the rest. For example, automated consent software could indicate to companies that users do not consent to their driving data being sold for advertising purposes.
Third, an ethics review committee may suggest that users should be given a choice about whether to share driving data, and that the policy should be transparent and easy to understand.
Benefits of being transparent
Recent reforms to privacy laws in Australia are a good start. These reforms promise to give Australians a legal right to take action over serious privacy violations, and have a greater focus on protecting children online.
But many of the ways of empowering users will require companies to go beyond what is legally required.
One of the biggest challenges will be motivating companies to want to change.
It is important to keep in mind that there are benefits to being transparent with users. It can help build trust and reputation. And in an era where consumers have become more privacy conscious, this is an opportunity for companies to get ahead of the game.