Can anyone recommend a decent Android ROM that strips out as much of the spyware as possible? Is GrapheneOS a good option? I need to get a new phone anyway so I don’t mind buying within a supported device list as long as I can get one on the used market for $300-$400 or less.
If anyone could recommend some learning resources for this stuff I’d really appreciate it.
@sorenpeter@darch.dk All valid points. Maybe the correct way to do it should be to start a new feed at the new URL rather than move the feed and break all the hashes.
> switch a couple of twt timestamps
The hashes would change and your posts would become detached from their replies. Clients might still have the old twts cached, so you might just create duplicates without replies, depending on an observer’s client.
> add in 3 different twts manually with the same time stamp
The existing hash system should be able to keep them separate as long as the content is different. I’m not sure if there are additional implementation-related caveats there.
@prologic@twtxt.net @bender@twtxt.net As someone who likes cryptocurrencies for their utility as money instead of an investment, I’m glad to see the hype train start to move on to the next thing.
@falsifian@www.falsifian.org @prologic@twtxt.net @sorenpeter@darch.dk @lyse@lyse.isobeef.org I think, maybe, the way forward here is to combine an unchanging feed identifier (e.g. a public key fingerprint) with a longer hash to create a “twt hash v2” spec. v1 hashes can continue to be used for old conversations depending on client support.
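To make that concrete, here’s a minimal sketch of what a v2 hash *could* look like. The field order, separator, digest size, and output length are all assumptions on my part, not a spec:

```python
import hashlib
from base64 import b32encode

def twt_hash_v2(key_fingerprint: str, timestamp: str, content: str) -> str:
    """Hypothetical v2 hash: bind the twt to an unchanging key fingerprint
    instead of a movable feed URL, and keep more digest bits than v1's
    7 characters to reduce collisions."""
    payload = "\n".join([key_fingerprint, timestamp, content]).encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    return b32encode(digest).decode("ascii").rstrip("=").lower()[:12]

# Example; the fingerprint is illustrative only.
print(twt_hash_v2(
    "ed25519:SHA256:23OiSfuPC4zT0lVh1Y+XKh+KjP59brhZfxFHIYZkbZs",
    "2024-09-08T12:34:56Z",
    "Hello twtxt!",
))
```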
@sorenpeter@darch.dk That could work. There are a few things that jump out at me.
- Nicknames on twtxt have historically been set on the client end. The `nick` metadata field is an optional add-on to the spec. I’m not sure it should be in the reply tag because it could differ between clients.
- URLs are safer to use, and we use them in the hash currently, but they can still change and we’re back to square 1. Feeds ought to have some kind of persistent identifier for this reason, which is why we’ve been discussing cryptographic keys and tag URIs in the first place.
- The current twt hash spec mandates collapsing the timestamp to seconds precision. If that rule carries over, two posts made within the same second can’t be told apart by a reply that references only the feed and the timestamp (see the sketch below).
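A quick sketch of that failure mode, assuming the reply reference ends up being roughly the feed URL plus a seconds-precision timestamp; the exact reference format here is made up:

```python
from datetime import datetime, timezone

def location_ref(feed_url: str, timestamp: str) -> str:
    # Hypothetical location-based reply reference: feed URL plus a
    # timestamp collapsed to seconds precision, as the v1 hash spec does.
    dt = datetime.fromisoformat(timestamp).astimezone(timezone.utc)
    return f"{feed_url}#{dt.strftime('%Y-%m-%dT%H:%M:%SZ')}"

a = location_ref("https://example.org/twtxt.txt", "2024-09-08T12:34:56.100+00:00")
b = location_ref("https://example.org/twtxt.txt", "2024-09-08T12:34:56.900+00:00")
assert a == b  # two different twts, one indistinguishable reply target
```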
@falsifian@www.falsifian.org TLS won’t help you if you change your domain name. How will people know if it’s really you? Maybe that’s not the biggest problem for something with such low stakes as twtxt, but it’s a reasonable concern that could be solved using signatures from an unchanging cryptographic key.
This idea is the basis of Nostr. Notes can be posted to many relays, and every note is signed with your private key. It doesn’t matter where you get the note from; your client can verify its authenticity. That way, relays don’t need to be trusted.
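Purely to illustrate the principle (Nostr itself uses Schnorr signatures over secp256k1 per NIP-01; Ed25519 via the `cryptography` package is just a stand-in here):

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()   # stays on the author's machine
public_key = signing_key.public_key()        # this *is* the identity

note = b"Hello from any relay"
signature = signing_key.sign(note)

# A client that fetched (note, signature) from an untrusted relay:
try:
    public_key.verify(signature, note)
    print("authentic, no matter which relay served it")
except InvalidSignature:
    print("tampered with or forged")
```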
@falsifian@www.falsifian.org I agree completely about backwards compatibility.
@falsifian@www.falsifian.org `tag:twtxt.net,2024-09-08:SHA256:23OiSfuPC4zT0lVh1Y+XKh+KjP59brhZfxFHIYZkbZs`? :)
> Key rotation
Key rotation is useful for security reasons, but I don’t think it’s necessary here because it’s only used for verifying one’s identity. It’s no different (to me) than Nostr or a cryptocurrency. You change your key, you change your identity.
> It makes maintaining a feed more complicated.
This is an additional step that you’d have to perform, but I definitely wouldn’t want to require it for compatibility reasons. I don’t see it as any more complicated than computing twt hashes for each post, which already requires you to have a non-trivial client application.
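For comparison, here’s roughly what a v1 hash already takes, going by my reading of the current spec (blake2b over URL, timestamp, and content; lowercase unpadded base32; last 7 characters). Treat the normalization details as approximate:

```python
import hashlib
from base64 import b32encode

def twt_hash_v1(feed_url: str, timestamp: str, content: str) -> str:
    # My reading of the current spec: join with newlines, blake2b,
    # lowercase unpadded base32, keep the last 7 characters.
    payload = "\n".join([feed_url, timestamp, content]).encode("utf-8")
    digest = hashlib.blake2b(payload, digest_size=32).digest()
    return b32encode(digest).decode("ascii").rstrip("=").lower()[-7:]
```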
> Instead, maybe…allow old urls to be rotated out?
That could absolutely work and might be a better solution than signatures.
> HTTPS is supposed to do [verification] anyway.
TLS provides verification that nobody is tampering with or snooping on your connection to a server. It doesn’t, for example, verify that a file downloaded from server A is from the same entity as the one from server B.
> feed locations [being] URLs gives some flexibility
It does give flexibility, but perhaps we should have made them URIs instead for even more flexibility. Then, you could use a tag URI, `urn:uuid:*`, or a regular old URL if you wanted to. The spec seems to indicate that the `url` tag should be a working URL that clients can use to find a copy of the feed, optionally at multiple locations. I’m not very familiar with IP{F,N}S, but if it ensures you own an identifier forever and that identifier points to a current copy of your feed, it could be a great way to fix it on an individual basis without breaking any specs :)
My first thought when reading this was to go to my typical response and suggest we use Nostr instead of introducing cryptography to Twtxt. The more I thought about it, however, the more it made sense.
- It solves the problem elegantly, because the feed can move anywhere and the twt hashes will remain the same.
- It provides proof that a post is made by the same entity as another post.
- It doesn’t break existing clients.
- Everyone already has SSH on their machine, so anyone creating feeds manually could adopt this easily.
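As a sketch of how low the barrier could be, this just shells out to OpenSSH’s `ssh-keygen -Y sign` / `-Y verify`; the `twtxt` namespace and the file names are my own picks, not anything agreed upon:

```python
import os
import subprocess

KEY = os.path.expanduser("~/.ssh/id_ed25519")

# Sign the feed in the (made-up) "twtxt" namespace; writes twtxt.txt.sig.
subprocess.run(
    ["ssh-keygen", "-Y", "sign", "-f", KEY, "-n", "twtxt", "twtxt.txt"],
    check=True,
)

# Verify it later, from anywhere, against an allowed_signers file that maps
# an identity to a public key ("<identity> <key-type> <base64-key>").
with open("twtxt.txt", "rb") as feed:
    subprocess.run(
        ["ssh-keygen", "-Y", "verify", "-f", "allowed_signers",
         "-I", "alice@example.org", "-n", "twtxt", "-s", "twtxt.txt.sig"],
        stdin=feed, check=True,
    )
```

The namespace only prevents a signature made for one purpose from being replayed for another, so whatever value gets picked just has to be used consistently by signers and verifiers.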
There are a couple of elephants in the room that we ought to talk about.
- Are SSH signatures standardized and are there robust software libraries that can handle them? We’ll need a library in at least Python and Go to provide verified feed support with the currently used clients.
- If we all implemented this, every twt hash would suddenly change and every conversation thread we’ve ever had would at least lose its opening post.
@prologic@twtxt.net It’s pretty hard, actually. There will either be more friction than people will accept (BitTorrent) or it won’t be decentralized in practice (LBRY/Odysee).
@bender@twtxt.net, do you depend on first-party Bluesky servers for the client application?