No commits in common. "sctpub" and "mhttp" have entirely different histories.

mhttp.md (new file)

# Mobile Hypertext Transfer Protocol, version 1.1
## Introduction
Mobile Hypertext Transfer Protocol (MHTTP) is a protocol that helps speed up the web and
protect it from DDoS attacks, while providing strong security guarantees. It's heavily
based on HTTPS, and also uses SCTP where available.
The "Mobile" in the name refers to mobile phones.
The MHTTP version tracks the corresponding HTTP version it's built on. As such, there's no
MHTTP/1.0 or MHTTP/0.9 as there are no versions of MHTTP built on those HTTP versions.
This protocol builds on:
- SCTP
- TLS Application-Layer Protocol Negotiation (ALPN)
- A new type of TLS certificate
- HTTPS/1.1 (HTTP/1.1 over TLS)
This protocol shall define the TLS ALPN protocol ID `"mhttp/1.1"`.
This protocol also makes use of the TLS ALPN protocol ID `"http/1.1"`.
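As an illustrative sketch (not part of the spec), a client using Python's standard `ssl` module could offer both protocol IDs like so:

```python
import ssl

# Sketch: an MHTTP/1.1 client offers both ALPN IDs, preferring "mhttp/1.1".
# The IDs come from this spec; everything else is stock TLS setup.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["mhttp/1.1", "http/1.1"])  # descending order of preference
```

After the handshake, `SSLSocket.selected_alpn_protocol()` tells the client which of the two the peer picked.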
## New certificate types (and new certificate authority types)
Two new kinds of certificates shall be created:
- \[TBD], the public-facing certificate type in mhttp/1.1.
- \[TBD], used by CAs to sign the above.
New types are needed so that existing CAs, such as Let's Encrypt, cannot issue these certificates at the moment.
This is a temporary measure and should be revised once certificate providers adjust to this new
protocol.
## The Network Model
This protocol is built upon the following network model:
```
client <--------> CDN <--------> server
```
Currently, existing protocols are used to provide CDN services like so:
```
client <--HTTPS-> CDN <--HTTP--> server
```
## Securing communications between client and server
The MHTTP protocol attempts to secure communication between client and server through the whole
path. It puts both the client and the server in control, as it should be, where the existing
options put the CDN in control.
To be able to do this, it needs to cover a few different scenarios.
### Backwards compatibility with HTTPS, TCP, and client certificates
For backwards compatibility with HTTPS, TCP, and client certificates, the following section
applies, but for all HTTP methods.
### Client-to-server (private) data
The most basic scenario is when sending "private" data. This is used when modifying server data.
This includes the following HTTP methods:
- POST
- PUT
- DELETE
- PATCH
- \[TBD]
In this scenario, the client may use a TCP or SCTP connection and must use the `"http/1.1"`
protocol. The CDN must pass this stream through unchanged, but may choose to restrict the client's
bandwidth and/or connection limits. The CDN must use SCTP for any connections to the server.
This scenario should be used whenever the client would make a request using one of the above HTTP
methods.
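A minimal sketch of what such a pass-through request could look like on the wire (the helper name and header set are illustrative, not part of this spec):

```python
def build_private_request(method: str, path: str, host: str, body: bytes) -> bytes:
    """Build a plain HTTP/1.1 request for the private-data scenario.
    The CDN forwards these bytes unchanged (hypothetical helper)."""
    assert method in {"POST", "PUT", "DELETE", "PATCH"}
    head = (f"{method} {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Content-Length: {len(body)}\r\n"
            "\r\n")
    return head.encode("ascii") + body

req = build_private_request("POST", "/submit", "example.com", b"hello")
```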
### Client-to-CDN, cached data
In most cases, the client wants to request data from the \[server/CDN] instead. In MHTTP/1.1, the
client must use an SCTP connection and offer the `"mhttp/1.1"` and `"http/1.1"` protocols in
descending order of preference.
If a CDN exists and supports MHTTP/1.1, it must negotiate a TLS stream with the client using the
CDN certificate mentioned in section 2.
MHTTP/1.1 requests look just like HTTP/1.1 requests, but with some key differences:
- POST, PUT, DELETE, PATCH, \[TBD] are not supported.
- Cookies are not supported.
- "MHTTP/1.1" is used instead of "HTTP/1.1".
- \[TBD]
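A sketch of what a conforming request might look like, with the differences above checked in Python (the resource path and header set are illustrative):

```python
# Sketch: a minimal MHTTP/1.1 request. It mirrors HTTP/1.1, but the protocol
# token is swapped and cookies are absent (header set is not final).
request = ("GET /articles/1 MHTTP/1.1\r\n"
           "Host: example.com\r\n"
           "Accept: text/html\r\n"
           "Accept-Language: en\r\n"
           "\r\n")

FORBIDDEN_METHODS = {"POST", "PUT", "DELETE", "PATCH"}
method = request.split(" ", 1)[0]
assert method not in FORBIDDEN_METHODS  # those methods use plain "http/1.1"
assert "Cookie:" not in request         # cookies are not supported
```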
The CDN must cache based on at least:
- Resource Path
- Accept/Content-Type header
- Accept-Language header
If the CDN has the response in its cache, it should just send that response out, without hitting
the server.
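A minimal sketch of the required cache key, assuming a simple in-memory cache (helper names are illustrative; real CDNs may key on more):

```python
def cache_key(path: str, accept: str, accept_language: str) -> tuple:
    """Minimal CDN cache key per the spec: at least the resource path,
    the Accept header, and the Accept-Language header."""
    return (path, accept.strip().lower(), accept_language.strip().lower())

cache = {}
key = cache_key("/articles/1", "text/html", "en")
cache[key] = b"<html>...</html>"
# A repeat request with the same three values hits the cache without
# touching the server.
assert cache_key("/articles/1", "text/html", "en") in cache
```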
The CDN should not send the same response for different Accept headers without taking the clients'
stated preferences into account. For example, if the first request contains the header:
Accept: */*
And the response contains:
Content-Type: text/html
But the second request contains:
Accept: application/xhtml+xml, text/html; q=0.5
Then the CDN should hit the server for an `application/xhtml+xml`. If one is found, a subsequent
request with `Accept: */*` may retrieve an `application/xhtml+xml` instead of `text/html`.
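The preference logic above relies on standard `q` values; a simplified sketch of how a CDN might rank them (ignores Accept parameters other than `q`):

```python
def parse_accept(value: str):
    """Parse an Accept header into (media_type, q) pairs, highest q first.
    Simplified sketch: only the q parameter is honoured."""
    out = []
    for part in value.split(","):
        fields = part.strip().split(";")
        mtype, q = fields[0].strip(), 1.0
        for p in fields[1:]:
            k, _, v = p.strip().partition("=")
            if k == "q":
                q = float(v)
        out.append((mtype, q))
    out.sort(key=lambda t: t[1], reverse=True)
    return out

# The second client prefers XHTML over HTML, so the CDN should not reuse the
# cached text/html response before asking the server for application/xhtml+xml.
prefs = parse_accept("application/xhtml+xml, text/html; q=0.5")
```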
If the CDN does not have the response in its cache, the next scenario applies.
### Client-to-CDN, uncached data
If the CDN does not have the response in its cache, it must open a stream to the server, using the
`"mhttp/1.1"` protocol and the CDN certificate as the TLS client certificate.
The CDN must then repeat the exact same request it was given, as in plain HTTP/1.1. \[TODO:
specify how to identify the client to the server] The server may then choose to send the CDN a
response. However, the response has some key differences from HTTP/1.1.
First, the response starts with the server's certificate.
Second, the response must contain the server's domain name and the resource path. We don't rely on
the server certificate's domain name because it may be valid for multiple domain names.
Third, the response must contain the cache control headers:
- `Expires`
- \[TBD]
It should also contain additional cache control headers for use by the CDN and client.
Fourth, the response (domain, path, headers and body) must be signed by the provided certificate's
key (server's private key). The certificate itself is not included in this signature.
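A sketch of signing the (domain, path, headers, body) tuple. HMAC-SHA256 stands in here for the server's real public-key signature, purely for illustration; note that the certificate itself is excluded from the signed data, as the spec requires:

```python
import hashlib
import hmac

def sign_response(key: bytes, domain: str, path: str,
                  headers: bytes, body: bytes) -> bytes:
    """Sign domain, path, headers, and body. HMAC-SHA256 stands in for the
    server key's actual signature scheme. The certificate is NOT included."""
    msg = b"\x00".join([domain.encode(), path.encode(), headers, body])
    return hmac.new(key, msg, hashlib.sha256).digest()

sig = sign_response(b"server-private-key", "example.com", "/articles/1",
                    b"Expires: Thu, 01 Jan 2026 00:00:00 GMT",
                    b"<html>...</html>")
```

A client would recompute the same value from the received fields and compare; any change to the domain, path, headers, or body breaks verification.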
If the CDN provides an expired response to the client, as defined by the `Expires` header, or a
response for another domain or resource path, the client must warn the user and discard the
response. To account for network latency, the CDN may choose to expire the response a few seconds
or a few minutes before the true expiry date.
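A sketch of the CDN-side expiry check with such a safety margin (the margin value is illustrative):

```python
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

def is_usable(expires_header: str, now: datetime, margin: timedelta) -> bool:
    """CDN-side check: treat a cached response as expired `margin` before
    its true Expires time, to absorb network latency (sketch)."""
    expires = parsedate_to_datetime(expires_header)
    return now < expires - margin

now = datetime(2025, 1, 1, 0, 0, tzinfo=timezone.utc)
ok = is_usable("Thu, 02 Jan 2025 00:00:00 GMT", now, timedelta(minutes=5))
```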
If the server chooses not to send the CDN a response, it can do so in the next scenario.
### CDN-mediated client-to-server, (true) private data
If the server doesn't want the CDN to cache a GET/\[TBD] request, it can open a new SCTP stream
for sending the data directly to the client.
If the server opens an SCTP stream to the CDN instead of responding to a request, the CDN must
forward this new stream as a new stream to the client. The client should then start a TLS
handshake with the server, using the `"http/1.1"` protocol, and run a normal HTTP request with
all normal HTTP headers, including cookies. See the backwards compatibility section for details.
(This requires SCTP because TCP-based alternatives would either require the CDN to signal a closed
connection, which could also be caused by other network factors not involving the CDN, and would
thus unnecessarily increase server load and reduce the reliability of the backwards compatibility
mechanisms, or they'd involve a reverse TCP connection, and we already know how that went for FTP.)
## Use by ISPs - transparent proxies
The CDN CA store is separate from the main CA store, and can only be used to sign CDN certs or
other CDN CA certs. This means it's fairly safe to put arbitrary certs in it. ISPs may use this
to provide transparent HTTPS proxies, in which case they act like a CDN as defined above, except
they do not send their cert upstream, instead relying on their capabilities as an ISP and using
the client's real IP address for the request.

sctpub.md (deleted file)

# SCTPub
Hi! So this is me trying to explain my plans with SCTPub (ActivityPub over SCTP).
And remember: this is *my* idea, and I don't expect ppl will agree with it, and that's okay! I don't like TLS, but that
doesn't mean I refuse to use it.
First, I'll talk about the choice of SCTP, and the choice of authentication and authorization mechanisms (which are actually
tightly coupled).
## Why SCTP?
SCTP is a little-known protocol, so the idea was to use it to make it more popular and to experiment with it.
SCTP is in theory capable of seamless proxying, such that you can talk to the proxy at the same time as you talk to the
target, but nothing seems to use this feature. It would still be target-controlled and signed, but the proxy would have some
control over caching. Sensitive requests would go straight through to the server, without the proxy being able to read them.
This puts the proxy in a position of a semi-trusted middleware. Technically all routers on the internet need to be semi-trusted
middleware, as any of those routers can track you and target you (e.g. with ads) based on that - see:
[ALTER (LTE) attack](https://alter-attack.net/), which explains how a middleware can track (fingerprint) the pages you visit.
Since we already need to somewhat trust the middleware to be able to do anything, we could use that semi-trust state to provide
more efficient caching. This can be used to a) reduce our server load and b) protect against DDoS attempts. So I guess at this
point it'd no longer be "pure" SCTP as it'd involve MITM relations between the server and the middleware, but that's okay.
(I don't think this is the right place to go in-depth into this idea, tho, so I'll stop here. I like to believe mobile
operators would be happy with the possibility of downscaling their link sizes tho. This can even be used for private content
e.g. if you have a large chat group you just encrypt the large data (video, etc) and send the group a key, and the data gets
cached. Crypto is fun.)
## Words about Authentication
Registering on SCTPub would require a username and a password, as well as an email address and a display name (handle). The
username and password never get sent to the server, only the email address and the display name (handle). This means if the
server is MITMd or malicious or a phishing server is used, the user's identity (login information) is still protected.
The login interface:
```
+-------------------------------------+
| Login [x] |
| |
| Username: _____________ |
| Password: _____________ |
| Instance: _____________ |
+-------------------------------------+
```
The registration interface:
```
+-------------------------------------+
| Register [x] |
| |
| Username: _____________ |
| Password: _____________ |
| Instance: _____________ |
| Email: _____________ |
| Handle: _____________ |
+-------------------------------------+
```
The authentication is done using a public key mechanism. The Ed25519 private key is generated:
GO ASK ##crypto @ irc.freenode.net BECAUSE THEY INSIST THAT I'M NOT ALLOWED TO SPECIFY THIS STUFF WHEN TRYING TO MAKE A PORTABLE PROTOCOL
The user should be able to change their username/password (aka login key). The user may have multiple login keys.
Once authenticated, the key is to be replaced with a temporary, non-token-based (i.e. no cookies) session key.
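Since the actual derivation is deliberately left unspecified above, here's one *possible* (NOT normative, purely illustrative) approach: stretch the username and password with a KDF, salted by the instance address, to get a 32-byte seed an Ed25519 keypair could be generated from.

```python
import hashlib

def derive_login_seed(username: str, password: str, instance: str) -> bytes:
    """HYPOTHETICAL derivation, not specified by this document: scrypt over
    username:password, salted with the instance address, yielding a 32-byte
    seed for an Ed25519 keypair. The server never sees the username or
    password, only the resulting public key."""
    return hashlib.scrypt((username + ":" + password).encode(),
                          salt=instance.encode(), n=2**14, r=8, p=1, dklen=32)

seed = derive_login_seed("soni", "hunter2", "cybre.space")
```

Because the salt includes the instance address, the same username/password yields a different keypair on every instance, which also limits cross-instance phishing.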
All messages encrypted or signed by the client must include the instance address. All messages received by the instance must
check the instance address. Nonces must also be used but w/e you probably know this better than me. Encryption vs signing
should be chosen as needed depending on the requirements (see also *Why SCTP?* above).
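A rough sketch of binding the instance address and a fresh nonce into every message, with HMAC standing in for the real signature scheme (all names are illustrative):

```python
import hashlib
import hmac
import secrets

SEEN_NONCES = set()  # instance-side replay protection (sketch; not persistent)

def seal(session_key: bytes, instance: str, payload: bytes):
    """Client side: every message binds the instance address and a nonce."""
    nonce = secrets.token_bytes(16)
    msg = instance.encode() + b"\x00" + nonce + b"\x00" + payload
    return nonce, hmac.new(session_key, msg, hashlib.sha256).digest()

def accept(session_key: bytes, instance: str, payload: bytes,
           nonce: bytes, sig: bytes) -> bool:
    """Instance side: reject replayed nonces and bad signatures."""
    if nonce in SEEN_NONCES:
        return False
    msg = instance.encode() + b"\x00" + nonce + b"\x00" + payload
    if not hmac.compare_digest(sig, hmac.new(session_key, msg,
                                             hashlib.sha256).digest()):
        return False
    SEEN_NONCES.add(nonce)
    return True
```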
## Words about Authorization
Authorization should also be done with cryptography, and not OAuth. OAuth is awful. Please don't use OAuth. In fact,
authorization should be done with public key *encryption*, not even signing. The keys are revokable. The same
mechanism that allows zero-knowledge authentication can also be used for authorization, as it already provides all of this.
Authorization doesn't need the session keys mechanism described above. In fact, that mechanism is a form of authorization.
User agents may optionally authenticate themselves using a UA key. In other words, messages would be double-signed or
double-encrypted, once by the authorization key, once by the UA key. The authorization key should always be the "inner" key
while the UA key should always be the "outer" key. (e.g. UA signatures also sign the authorization signature.)
This shouldn't be used by client-side UAs like mobile apps or web browsers.
- - -
Now that we have some of the security aspects out of the way, let's talk about the community aspects: moderation, interaction,
friendship, etc.
## Communities
One of the goals of SCTPub is to provide for *communities*, something we lost a long time ago with the downfall of forums.
Y'know, like phpBB and stuff.
As such SCTPub *actively encourages* you to have different accounts on different instances, by not only making it easy to do
so (with the standard User Agent providing an easy way to sign up to other instances), but also through "dark patterns"
(sorry). The main dark pattern is that while ppl can boost your content to other instances' main feeds (separate from your
self-curated feed, more similar to the mastodon "Local Timeline" but including all interactions from local users - boosts,
etc), you can't post on other instances like this. Ppl from other instances can choose to follow you, and your content
shows up to them, but it's not the same as being in the community. Think like Reddit, but each instance is a different sub.
While there's no technical mechanism to encourage instances to have different rules, I do believe in different instances
having unique rules but still federating together. This would also support this "communities" model, and encourage multiple
accounts.
## Moderation
One of the things I miss a lot from the time we had forums is moderation. Back in the day, some forums, as elitist as it may
be, would require you to participate in a closed-off area before participating in the rest of the community. While I'm not
totally a fan of that idea, I do think a similar idea is warranted: participating locally before federating. This doesn't
(on its own) reduce spam on a local level, but it helps a lot on a federated level. The software *should* still allow
instances to opt into a pre-approval stage where you don't even interact with the locals, tho.
Additionally, to register on other instances, some metadata would be sent across servers. This metadata would be composed
of the originating instance and the target instance, and would be signed by the user's login key. This metadata is then
used to build a chain of trust (web of trust?) of sorts. Also, because you have to sign it, you actually have full control
over which instances/accounts know about your other accounts.
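A sketch of that metadata, with HMAC again standing in for the login-key signature (field names are illustrative):

```python
import hashlib
import hmac
import json

def registration_metadata(login_key: bytes, origin: str, target: str) -> dict:
    """Sketch of cross-instance registration metadata: originating instance,
    target instance, signed with the user's login key (HMAC stands in for
    the real signature scheme)."""
    payload = json.dumps({"origin": origin, "target": target}, sort_keys=True)
    sig = hmac.new(login_key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

meta = registration_metadata(b"login-key", "cybre.space", "example.social")
```

Because the user signs it, they decide which accounts get linked; an instance can verify the chain but can't forge new links.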
Depending on the chain of trust model of each instance, you'd either have to go through the moderator-approval stage on the
other instance, or it'd trust you because N other instances trust you, or because N instances on its trusted instances list
trust you, or [...]. There's a lot of flexibility here.
This chain of trust can also be stored, and retrieved at login time, allowing for smooth login across many instances (see
Communities above). If one of your instances is down, that's no big deal - just log in on another! I don't have many
accounts across the fediverse because of how hard it is to manage them, but this alone would make it a lot easier.
Moderation requests (e.g. reports) should definitely federate, altho I haven't worked out all of the details with this.
## Interactions
This is a tricky one to explain, because I'm not trying to make another Twitter or Facebook. So bear with me for a second.
Twitter and Facebook have an attention-based model. On Twitter and Facebook you're "supposed" to make posts that bring you
attention, like memes or shitposts or flame wars. Reddit also encourages this to a slightly lesser extent. This is NOT
something I want, even if they're "technically" interactions. (besides, [there's an xkcd for that](https://xkcd.com/1475/))
Instead, we should look at forums. In forums, there's no pressure to have the most shared content. Instead there's pressure
to have the most talked about content. That's what I want when I talk about interactions. I want ppl to talk to me, not
just share my posts. Social isolation is awful, and altho liking and sharing do have their place as forms of non-verbal
communication (and I'm not proposing we remove them), they just aren't always enough.
I want to bring back that cozy feel of forums, while taking it to the next level. The federated nature of the fediverse
allows for ppl to easily interact with each other, which brings us one step closer to taking it to the next level.
(I could go on about this all day, but I'm really starving and I'd like to finish this ASAP so I can go eat. it's getting
hard to stay focused now.)
## Friendship
Friendship is important, so just allowing interactions across instances while providing a cozy feel isn't enough.
So there should be a way to keep up with your friends, and putting some emphasis on it. But I don't think it should be the
default.
Both Facebook and Twitter have this thing where you can follow ppl. And then you follow too many ppl and it just goes too
fast. You can still follow ppl, and it should be encouraged, and there should be lists like in mastodon (or how reddit used to
have multireddits?) so you can organize things better, but the local timeline, which should be cozy, should be the default.
(Big instances should be discouraged, while subject-specific/topical instances should be encouraged.)
- - -
Okay I'm too hungry to keep going. Cya another time tho. o/
Sorry that this was so long. I needed to write it down somewhere.
You can boost and send comments on the associated ActivityPub post: <https://cybre.space/@SoniEx2/101557574984189610>