Summary: TikTok’s new US joint venture has updated its privacy policy to allow the collection of precise location data (depending on user settings). That sounds like a minor wording change, but it’s strategically important because TikTok is simultaneously being reorganised under a US-focused structure designed to address national‑security concerns about data access and algorithm influence.
In other words, TikTok is trying to reassure policymakers that US data is protected—while also expanding what data it can collect. That tension is the story.
What changed (the concrete facts)
From the BBC report:
- The updated US privacy policy says TikTok may collect precise location data, depending on settings.
- Previously, policy language referred to “approximate” location collection.
- TikTok has not said exactly when the new option will roll out to US users.
- The report says precise location sharing in the US is expected to be optional and off by default, with users asked to opt in.
- TikTok already collects location signals from SIM and/or IP address.
- Similar “nearby” style location collection already exists for some users in the UK and Europe.
Why “precise location” is sensitive (and why it’s different from IP/SIM)
Approximate location (IP/SIM) often places you at city or neighbourhood scale.
“Precise” location generally implies GPS-level accuracy, from which it is possible to infer:
- home and work addresses
- daily routines and commute patterns
- visits to sensitive locations (clinics, schools, religious sites)
- co-location patterns (“who is near whom, when”)
Even if the feature is opt-in, it increases the platform’s ability to profile users if uptake is significant—or if UX nudges encourage enabling it.
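The gap between “approximate” and “precise” can be made concrete with some rough arithmetic (a back-of-the-envelope sketch, not a description of TikTok’s systems): one degree of latitude is roughly 111 km everywhere on Earth, so each extra decimal place of a stored coordinate is about ten times finer.

```python
# Rough illustration: north-south resolution implied by the number of
# decimal places retained in a latitude value. One degree of latitude
# is approximately 111 km.

METERS_PER_DEGREE_LAT = 111_000

def resolution_meters(decimal_places: int) -> float:
    """Approximate north-south resolution for a coordinate rounded
    to the given number of decimal places."""
    return METERS_PER_DEGREE_LAT / (10 ** decimal_places)

for places in range(6):
    print(f"{places} decimal places ~ {resolution_meters(places):,.0f} m")
```

Two decimal places (about 1 km) is city-block scale; four or five decimal places, typical of raw GPS, resolves to a specific building, which is what makes home and work addresses recoverable.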
Opt-in isn’t a silver bullet: the UX question
Many privacy debates come down to one practical issue:
Is the consent flow genuinely informed, and does it carry enough friction to reflect a real choice?
A meaningful opt‑in is:
- clearly explained
- reversible
- not pressured by constant nags
A weak opt‑in is:
- buried in a confusing prompt
- presented as required for normal functionality
- “dark-patterned” into acceptance
So a key thing to watch is the exact copy and design of the prompt when it appears in-app.
Why TikTok wants location: product, ads, and “nearby” features
Location data can power:
- local content discovery (“what’s happening near you”)
- event and business recommendations
- local ad targeting
- safety features (fraud detection, spam reduction)
The BBC report references TikTok’s “Nearby Feed” feature in the UK/Europe. This is the strongest product rationale: people want local relevance. The privacy risk is that “local relevance” can be achieved with far less precision than GPS, depending on design choices.
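One way to see that claim, sketched in Python under assumed design choices (the `coarse_cell` function and grid size are hypothetical illustrations, not anything TikTok has described): a “nearby” feed can be keyed to a coarse grid cell of roughly 1 km, discarding the precision that would reveal an exact address.

```python
# Hypothetical sketch: key "nearby" content to a coarse grid cell
# instead of exact GPS coordinates. A 0.01-degree cell is roughly
# 1 km of latitude -- enough for neighbourhood-level relevance.

def coarse_cell(lat: float, lon: float, cell_deg: float = 0.01) -> tuple:
    """Snap a coordinate to the corner of its grid cell, discarding
    building-level precision."""
    return (round(lat // cell_deg * cell_deg, 6),
            round(lon // cell_deg * cell_deg, 6))

# Two users a few hundred metres apart land in the same cell,
# so they can be served the same local content:
a = coarse_cell(51.50351, -0.12766)  # near Trafalgar Square
b = coarse_cell(51.50642, -0.12721)  # ~300 m away
print(a, b, a == b)
```

The design point: the server only ever needs the cell identifier, not the raw coordinate, to deliver local discovery.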
The US restructuring context: why politics is never far away
The policy change comes after a deal that created a US joint venture to run TikTok’s US operations.
Key points in the report:
- The new entity (TikTok USDS Joint Venture LLC) includes major investors.
- Oracle is a central infrastructure partner.
- ByteDance retains a minority stake of just under 20%.
- The joint venture claims it will secure US user data and the algorithm via privacy and cybersecurity measures.
This matters because TikTok’s “data narrative” in the US is not purely consumer privacy; it’s also about geopolitical trust.
The algorithm angle: why data governance is tied to recommendation systems
TikTok’s recommendation algorithm is the company’s competitive advantage.
The report says Oracle will oversee retraining the recommendation system on US user data and that it will be secured in Oracle’s US cloud environment.
Even if the technical plan is solid, policymakers will ask:
- who can access training data?
- who can influence the model update cycle?
- what oversight exists for changes?
In a political environment, the algorithm is perceived not only as a product system but as a potential influence system.
AI tools and data: the other expansion in the policy
The updated policy also expands what TikTok may collect about user interactions with its AI tools:
- prompts/questions
- metadata about when/where/how AI content was created
This matters because prompts are often personal. People paste:
- private context
- drafts of sensitive messages
- information about work or relationships
So the combination of more AI features + more data collection creates a second privacy surface beyond location.
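As an illustration of why prompt collection is sensitive, here is a minimal client-side redaction sketch. Everything in it is hypothetical: the patterns are deliberately simplistic, real PII detection is far harder, and nothing here reflects TikTok’s actual processing.

```python
import re

# Hypothetical sketch: strip obvious identifiers from a prompt before
# it leaves the device. The two patterns below are illustrative only.

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(prompt: str) -> str:
    """Replace each matched identifier with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Email me at jo@example.com or call +44 20 7946 0000"))
```

The point of the sketch is what it cannot do: free-text prompts carry context (“my manager”, “my diagnosis”, an address written out in words) that no pattern matcher reliably catches, which is why prompt logs are a privacy surface in their own right.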
A practical privacy checklist for users
If you’re deciding whether to enable location, a simple checklist:
- Do you need the feature (nearby feed, local discovery), or is it optional?
- Is it “while using” only, or “always” access?
- Can you keep “precise” off but still use “approximate”?
- Does TikTok provide a clear dashboard to see what it stored?
For many users, the best compromise is:
- keep precise location off
- allow location only when actively using the app (if needed)
What to watch next (signals that this becomes a bigger issue)
- The opt‑in design: is it truly optional, off by default, and not repeatedly nudged?
- Feature linkage: are core experiences tied to enabling location?
- Regulatory scrutiny: lawmakers may treat location collection as a stress test of TikTok’s “safer US model.”
- Transparency: clearer disclosures, retention policies, and user controls reduce backlash.
- Security incidents: any breach or misuse of location data escalates the issue immediately.
Bottom line
TikTok’s move is not inherently sinister—many apps collect location—but it arrives at a sensitive moment.
TikTok is trying to prove it can operate as a trusted US platform. Expanding location and AI prompt collection could undermine that trust unless the company is unusually transparent and conservative in how it ships the features.
Sources
- BBC News (Technology): https://www.bbc.co.uk/news/articles/cvgnj7v2rr5o?at_medium=RSS&at_campaign=rss