The digital world holds no mysteries for younger generations. Yet the ease with which they navigate it can be deceptive: understanding how technology makes us vulnerable is key to learning how to protect our data and privacy.

The myths of the digital native and the multitasker

We live under the illusion that the new generations, having grown up in environments where modern digital devices are readily available, are ‘digital natives’ and have mastered this technological environment seamlessly. However, the ease with which they surf the internet does not equate to a critical understanding and deep knowledge of the risks of the digital world. Trust in networks, exposure to manipulated content and the free sharing of personal data invite us to debunk this myth and rethink young people’s relationship with technology.

  • Studies such as the one published in Teaching and Teacher Education confirm that young people consume more news through social networks than other age groups and trust these sources more (27% compared with 17% overall), in part because the familiarity of the source lends it an unquestioned credibility. This apparent contradiction is explained by contextual factors beyond age, such as how the platforms are used. It also perpetuates the myth of the ‘multitasking’ young person, which in reality masks a simultaneous use of platforms that often reduces attention span and critical thinking. Instead of mastering the web, many young people simply adapt to its dizzying rhythms without questioning its implications.

This critical view is reinforced by the article ‘El alto precio de lo gratuito’ (‘The High Price of Free’) by our researcher David Arroyo, which shows how digital gratuity often implies a transfer of data. Both texts warn about the role of algorithms and social networks in the propagation of disinformation and the creation of information bubbles. Against this backdrop, they propose adopting a ‘zero trust’ mentality: questioning platforms, protecting data and treating privacy as an ethical duty in a context of algorithmic surveillance.

In this context, even those who ‘swim like fish in water’ in the digital world rarely stop to learn how to do so safely. Here are six everyday situations in which digital security is compromised and which can be avoided with a little attention and good habits:

  • Beware of pre-installed apps: Many devices, especially Android devices, come with apps that collect personal data without the user noticing. It is advisable to delete or deactivate those that are not necessary, always read the terms of use, and consider system options that are more respectful of privacy.
  • Hygiene in the use of email: It is essential to check the origin of emails, avoid opening suspicious links or attachments, and use temporary accounts for one-off registrations. In addition, it is recommended to use secure password managers and maintain active vigilance against possible breaches. This approach is part of the zero trust security model.
  • Protect your browsing and privacy: Using well-configured and privacy-oriented browsers helps to prevent our browsing habits from being used to train AI systems or to manipulate us through advertising or disinformation campaigns. Privacy protection is an individual responsibility in the digital age.
  • Never trust, always verify: Even if a website has HTTPS and a valid certificate, this does not guarantee that the information is trustworthy. It is good practice to use content verification tools, save screenshots of pages and monitor changes with tools such as Wayback Machine or UpdateScanner.
  • Automatic and encrypted backups: It is essential to have automatic backups of your important data. Although many systems offer free cloud storage, such as Google Drive, you should consider encrypted alternatives or local solutions such as NAS systems to maintain greater control.
  • Use end-to-end encryption (E2E): Make sure to enable E2E encryption in messaging and backup services, as it ensures that only the sender and receiver can access the content. Not all apps offer this by default. Also, be critical of discourses that try to discredit encryption, as protecting privacy should not be seen as an obstacle, but as a fundamental right.
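
The ‘never trust, always verify’ habit above can be partly automated. As a minimal sketch (the snapshot strings below are illustrative, not from the article), the following Python code stores a SHA-256 fingerprint of a page’s content and flags when it changes, similar in spirit to tools like UpdateScanner:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a stable SHA-256 fingerprint of page content."""
    return hashlib.sha256(content).hexdigest()

def has_changed(old_fingerprint: str, new_content: bytes) -> bool:
    """Compare a stored fingerprint against freshly fetched content."""
    return fingerprint(new_content) != old_fingerprint

# Example: simulate two snapshots of the same page.
snapshot_v1 = b"<html><body>Terms of service v1</body></html>"
snapshot_v2 = b"<html><body>Terms of service v2</body></html>"

baseline = fingerprint(snapshot_v1)
print(has_changed(baseline, snapshot_v1))  # False: content unchanged
print(has_changed(baseline, snapshot_v2))  # True: content changed
```

In a real workflow the baseline fingerprint would be stored between runs and compared against the page as actually downloaded; keeping only a hash, rather than the full page, is enough to detect that something changed.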

Privacy and manipulation in the online ecosystem

According to data from 2025, Android is the primary operating system used by those accessing the internet from a mobile device. Like other operating systems, it ships with a good volume of pre-installed free apps, and therein lies the catch: ‘The high price of free’ is not just a catchphrase. As David Arroyo warns, every pre-installed app, every click on ‘Accept terms and conditions’ without reading them, opens a door to invasive data-collection practices. The digital ecosystem sells us immediacy and freebies, but charges us in personal information and browsing patterns, used to segment, persuade or even manipulate users. As raised in the study by Papadopoulos et al. (2017), the business model based on programmatic advertising allows advertisers to pay significant amounts to reach certain profiles, which demonstrates how our data are marketed as products.

In addition, many young people do not perceive the need to protect their accounts, emails or digital identities as relevant. The lack of habits such as the use of password managers or temporary email addresses highlights a false sense of security.

Symbolic manipulation and propaganda strategies in digital environments

Social networks have become a veritable symbolic battleground, especially among young people. What Tomlinson (2004) called ‘the culture of immediacy’ shapes not only how we consume information, but also how we are manipulated. The speed with which content is distributed prevents critical reflection, reinforcing cognitive biases in an environment where virality takes precedence over truth.

Through memes, videos and viral campaigns, digital manipulation is normalised. Examples such as the phrase ‘The best place to hide a body is page two of Google search results’ or satirical images of Putin controlling social networks show the extent to which humour and irony can be used to disguise propaganda techniques. Even portraits, such as the image of Greta Thunberg teaching about climate change, illustrate the symbolic charge with which opinion is constructed on the internet and how imagery can be used as an instrument of indoctrination.

This manipulation is also fuelled by our own data. Artificial intelligence algorithms, trained on our browsing habits, personalise not only advertisements, but also the news and content we consume. This generates a ‘bubble effect’, reinforcing our beliefs and isolating us from contrary views. As Arroyo warns in ‘The High Price of Free’, by not protecting our data we unwittingly feed this very mechanism.

Artificial intelligence and the rise of automated disinformation

AI doesn’t just personalise content: it also generates it. With increasingly accessible tools, it is possible to create fake images of protests, simulated attacks such as the fictitious explosion of the Kio Towers, or videos in which politicians say things they never said. Deepfakes have gone from being entertainment to digital weapons capable of impersonating identities, manipulating emotions or altering the course of an election campaign. Their destabilising potential is enormous.

Even more alarming is their use in serious crimes. In a recent documented case, a network that generated images of child abuse using AI was discovered. This situation raises serious dilemmas about current legislation, which in Spain still has loopholes compared to other European countries. It also raises questions about the ethical and technological limits of online content surveillance.

Between surveillance, rights and protection measures

This increasing automation of surveillance also has psychological effects. According to recent data, 80% of children do not feel comfortable exploring their identity or sexuality if they perceive they are being digitally monitored. The tension between security and freedom is intensified by proposals such as #ChatControl, an EU initiative to monitor private communications for illicit material.

Although it seeks to protect minors, experts warn that it could lead to indiscriminate mass surveillance, with false positives, errors or even abuse. The possibility of automated systems reading our most personal messages has generated intense debate about the balance between privacy and protection.

In this context, the Zero Trust model is particularly relevant. It assumes that no device or platform should be considered secure by default. Therefore, practices such as using only essential applications, encrypting our communications, rotating passwords and using secure managers are recommended. As Arroyo reminds us, protecting privacy is now a personal and unavoidable responsibility in an increasingly intrusive digital environment.
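
Rotating passwords, one of the practices mentioned above, is easier with a generator. As a minimal sketch using Python’s standard `secrets` module (the length and alphabet below are illustrative choices, not a recommendation from the article):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits and punctuation,
    using the cryptographically secure `secrets` module (not `random`)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a fresh 16-character password on every run
```

A password manager performs the same job, plus storage; the point of the sketch is that unpredictable, per-site passwords cost nothing to produce, so reusing one password across services is never a necessity.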

Privacy as a duty and not just a right

The myth of the ‘digital native’ has installed the dangerous idea that young people have mastered technology simply because they have grown up with it. However, being fluent in navigation does not equate to critically understanding and protecting oneself from the risks of the digital environment. Constant exposure to social networks, blind trust in manipulable content and a lack of cybersecurity habits reveal a structural vulnerability. In an era of artificial intelligence, mass surveillance and symbolic manipulation, privacy can no longer be a luxury or a passive option: it is an ethical and collective duty. Adopting a ‘zero trust’ mindset, fostering critical thinking and strengthening our digital practices is no longer optional, but essential to exercise responsible digital citizenship and protect our fundamental rights.