From joe at begriffs.com Sat May 2 04:29:48 2020 From: joe at begriffs.com (Joe Nelson) Date: Fri, 1 May 2020 23:29:48 -0500 Subject: Why so many servers? Message-ID: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> I've been thinking about remote pair programming, and am wondering why I'm conditioned to communicate through third-party servers on the internet? For instance, to do a voice chat I would typically set up a Murmur server and then I and my programming partner would connect our Mumble clients to it. Why don't our computers talk directly to one another instead? Same thing for screen sharing. A shared tmux session on the frostbyte server still requires the server. What if local code editors on two people's computers could do p2p collaborative editing? The closest I've seen is VSCode's Live Share [0] feature. Looks pretty cool, although I think it still hooks into a third-party Microsoft server. Maybe the root problem is that many people operate in a local area network that doesn't have NAT configured to expose a listening port from a machine to the outside world. I think we just got used to that situation, and even when we control our own home networks we don't take the time to configure the router. On my own Mikrotik router, port mapping [1] isn't very hard. (Or maybe people do this all the time and it's just me who is catching on...) For restrictive networks there appear to be two techniques to punch a hole through the router, and they both require a server in the outside world. The first is STUN, where it appears that the interested parties talk to the STUN server just to set up communication, and it determines what their public IP and ephemeral ports are and lets them take it from there. That technique won't work in some situations (which?), so there's another approach called TURN, where all communication goes through a relay. Then there's a test called ICE [2] that determines which technique you need to use. As an interesting experiment, I'm wondering if anyone wants to try ICE/STUN with me and see if we can open a direct TCP connection between our computers? We could do a basic chat using netcat [3]. We could use a pre-existing public STUN server [4]. (Or just configure our routers to set up the NAT.) As a final note, I believe that the Jitsi Meet instance [5] Jes shared uses WebRTC which establishes direct connections between chat participants through a web server. In an ideal world I wouldn't need any server, but could use a native chat application and "call" another person directly with their IP address. Anybody have suggestions for more cool p2p software? 0: https://visualstudio.microsoft.com/services/live-share/ 1: https://wiki.mikrotik.com/wiki/Manual:IP/Firewall/NAT#Port_mapping.2Fforwarding 2: https://en.wikipedia.org/wiki/Interactive_Connectivity_Establishment 3: http://man.openbsd.org/nc#CLIENT/SERVER_MODEL 4: stun.stunprotocol.org powered by http://www.stunprotocol.org/ 5: https://cafe.cyberia.club/ From ian at ianbicking.org Sat May 2 17:02:41 2020 From: ian at ianbicking.org (Ian Bicking) Date: Sat, 2 May 2020 12:02:41 -0500 Subject: Why so many servers? In-Reply-To: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> Message-ID: I've done a fair amount with WebRTC. It's not a great experience. Reliability of point-to-point communication is not awesome, and while you can get something that often works pretty easily, it's very hard to get something that works reliably enough to be a pleasant experience. 
The successful companies like Zoom have to take on the entire vertical stack to make things actually work. People who build p2p stuff on WebRTC also report very disappointing experiences, like you can get as far as a demo and no further. You also need TURN servers to make anything reliable, and hosting a proxying server is hard due to abuse. STUN doesn't accomplish much, and I don't think it handles all the hole punching and other modern things required to connect directly to another computer.

It feels like what you want is the p2p sort of stuff, but most of it is not realtime, it's more storage-oriented.

You might also find IndieWeb interesting: https://indieweb.org/ -- not exactly about this topic, but people that share some similar interest in DIY tech.

Technically Urbit tries to create a p2p networking system based on stable but portable identities. The implementation is absurd, but I wish I knew who else exactly was trying it. I guess predictably the p2p community is fractured and hard to understand.

On Fri, May 1, 2020 at 11:30 PM Joe Nelson wrote:
>
> I've been thinking about remote pair programming, and am wondering why
> I'm conditioned to communicate through third-party servers on the
> internet? For instance, to do a voice chat I would typically set up a
> Murmur server and then I and my programming partner would connect our
> Mumble clients to it. Why don't our computers talk directly to one
> another instead?
>
> Same thing for screen sharing. A shared tmux session on the frostbyte
> server still requires the server. What if local code editors on two
> people's computers could do p2p collaborative editing? The closest I've
> seen is VSCode's Live Share [0] feature. Looks pretty cool, although I
> think it still hooks into a third-party Microsoft server.
>
> Maybe the root problem is that many people operate in a local area
> network that doesn't have NAT configured to expose a listening port from
> a machine to the outside world. I think we just got used to that
> situation, and even when we control our own home networks we don't take
> the time to configure the router. On my own Mikrotik router, port
> mapping [1] isn't very hard. (Or maybe people do this all the time and
> it's just me who is catching on...)
>
> For restrictive networks there appear to be two techniques to punch a
> hole through the router, and they both require a server in the outside
> world. The first is STUN, where it appears that the interested parties
> talk to the STUN server just to set up communication, and it determines
> what their public IP and ephemeral ports are and lets them take it from
> there. That technique won't work in some situations (which?), so there's
> another approach called TURN, where all communication goes through a
> relay. Then there's a test called ICE [2] that determines which
> technique you need to use.
>
> As an interesting experiment, I'm wondering if anyone wants to try
> ICE/STUN with me and see if we can open a direct TCP connection between
> our computers? We could do a basic chat using netcat [3]. We could use a
> pre-existing public STUN server [4]. (Or just configure our routers to
> set up the NAT.)
>
> As a final note, I believe that the Jitsi Meet instance [5] Jes shared
> uses WebRTC which establishes direct connections between chat
> participants through a web server. In an ideal world I wouldn't need any
> server, but could use a native chat application and "call" another
> person directly with their IP address.
> > Anybody have suggestions for more cool p2p software? > > 0: https://visualstudio.microsoft.com/services/live-share/ > 1: https://wiki.mikrotik.com/wiki/Manual:IP/Firewall/NAT#Port_mapping.2Fforwarding > 2: https://en.wikipedia.org/wiki/Interactive_Connectivity_Establishment > 3: http://man.openbsd.org/nc#CLIENT/SERVER_MODEL > 4: stun.stunprotocol.org powered by http://www.stunprotocol.org/ > 5: https://cafe.cyberia.club/ -- Ian Bicking | http://ianbicking.org From pstelzig at gmx.com Sat May 2 17:46:30 2020 From: pstelzig at gmx.com (paul) Date: Sat, 02 May 2020 12:46:30 -0500 Subject: Why so many servers? In-Reply-To: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> Message-ID: <0cf75d543a4758f29bdec0dd18a8bd4b9584c498.camel@gmx.com> On Fri, 2020-05-01 at 23:29 -0500, Joe Nelson wrote: > > the time to configure the router. On my own Mikrotik router, port > mapping [1] isn't very hard. (Or maybe people do this all the time > and > Any advice on figuring out RouterOS, I've got a pair of MikroTik radios that I'm planning on using to replace my Ubiquiti link but I'm finding RouterOS a bit opaque to learn. From forest.n.johnson at gmail.com Sat May 2 18:41:24 2020 From: forest.n.johnson at gmail.com (Forest Johnson) Date: Sat, 2 May 2020 13:41:24 -0500 Subject: Why so many servers? In-Reply-To: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> Message-ID: On Fri, May 1, 2020 at 11:30 PM Joe Nelson wrote: > > I've been thinking about remote pair programming, and am wondering why > I'm conditioned to communicate through third-party servers on the > internet? For instance, to do a voice chat I would typically set up a > Murmur server and then I and my programming partner would connect our > Mumble clients to it. Why don't our computers talk directly to one > another instead? I think there are two reasons. First of all, we can't discount the corporate influence on technology and the internet. Servers are great for business. You can't make money on a peer to peer product very easily. Servers are also great for authority and having power over other people. I think servers are the most pervasive and profound power structures that exist in our society today. The second reason being, servers are how everything started, and it's simply legacy being carried forward out of habit, necessity, and laziness. > Same thing for screen sharing. A shared tmux session on the frostbyte > server still requires the server. What if local code editors on two > people's computers could do p2p collaborative editing? The closest I've > seen is VSCode's Live Share [0] feature. Looks pretty cool, although I > think it still hooks into a third-party Microsoft server. > > Maybe the root problem is that many people operate in a local area > network that doesn't have NAT configured to expose a listening port from > a machine to the outside world. I think we just got used to that > situation, and even when we control our own home networks we don't take > the time to configure the router. On my own Mikrotik router, port > mapping [1] isn't very hard. (Or maybe people do this all the time and > it's just me who is catching on...) I think most non-technical people don't know or care about port mapping. Port mapping is supported by pretty much every router but no product requires the user to use it. 
Because any product that requires the user to learn how to do a new thing will fail as soon as someone introduces a competing product which doesn't require that. So all of the p2p software that exists today (video games, voice and video chat, etc) uses a server to establish an initial connection between two peers, like WebRTC / STUN.

> For restrictive networks there appear to be two techniques to punch a
> hole through the router, and they both require a server in the outside
> world. The first is STUN, where it appears that the interested parties
> talk to the STUN server just to set up communication, and it determines
> what their public IP and ephemeral ports are and lets them take it from
> there. That technique won't work in some situations (which?), so there's
> another approach called TURN, where all communication goes through a
> relay. Then there's a test called ICE [2] that determines which
> technique you need to use.

Just to point out, this is not for restrictive networks, this is basically for all networks behind a NAT. Which is pretty much **all networks** these days (except for public-facing web servers).

> As an interesting experiment, I'm wondering if anyone wants to try
> ICE/STUN with me and see if we can open a direct TCP connection between
> our computers? We could do a basic chat using netcat [3]. We could use a
> pre-existing public STUN server [4]. (Or just configure our routers to
> set up the NAT.)

I would do this! I was meaning to start developing and prototyping a simple p2p VPN soon. All of the peer to peer VPNs that I have seen require one of the nodes to have a public IP address. I was going to see if I could build a peer to peer VPN that uses STUN and works even if all peers are behind a NAT. Getting a basic socket working would certainly be a requirement for that!

> As a final note, I believe that the Jitsi Meet instance [5] Jes shared
> uses WebRTC which establishes direct connections between chat
> participants through a web server. In an ideal world I wouldn't need any
> server, but could use a native chat application and "call" another
> person directly with their IP address.

I don't think that's possible unless one person has a public IP (no NAT). There has to be some sort of session establishment mechanism, otherwise the router being connected to won't know which LAN address to forward the connect packet to. That doesn't mean there has to be a server, though. Some peer to peer software uses things like distributed hash tables, blockchains, etc to help coordinate connection establishment without a server.

> Anybody have suggestions for more cool p2p software?

I don't think you can talk about p2p in 2020 without talking about IPFS. I really like the ideas behind IPFS and I want to build software using it. To make it even better, the IPFS project made the decision to split a large part of their code base into a separate project called libp2p. Because IPFS has so much p2p-focused code, and that code is historically finicky / tricky to write, they decided to invest in creating a solid, modern library as a foundation for their application code.

https://ipfs.io/
https://libp2p.io/

I was planning on using the Go language version of libp2p to build my p2p signaling server and p2p VPN software.

Also, while bitcoin is one of the most famous pieces of p2p software, I think its dusty and under-appreciated cousin namecoin is just as cool.
I wrote a long blog post about how to use electrum-nmc to register a .bit domain name: https://sequentialread.com/how-to-register-a-namecoin-bit-domain-with-electrum-nmc/ They even fixed the main usability issue I highlighted in the article in the newest version of electrum-nmc! On Fri, May 1, 2020 at 11:30 PM Joe Nelson wrote: > > I've been thinking about remote pair programming, and am wondering why > I'm conditioned to communicate through third-party servers on the > internet? For instance, to do a voice chat I would typically set up a > Murmur server and then I and my programming partner would connect our > Mumble clients to it. Why don't our computers talk directly to one > another instead? > > Same thing for screen sharing. A shared tmux session on the frostbyte > server still requires the server. What if local code editors on two > people's computers could do p2p collaborative editing? The closest I've > seen is VSCode's Live Share [0] feature. Looks pretty cool, although I > think it still hooks into a third-party Microsoft server. > > Maybe the root problem is that many people operate in a local area > network that doesn't have NAT configured to expose a listening port from > a machine to the outside world. I think we just got used to that > situation, and even when we control our own home networks we don't take > the time to configure the router. On my own Mikrotik router, port > mapping [1] isn't very hard. (Or maybe people do this all the time and > it's just me who is catching on...) > > For restrictive networks there appear to be two techniques to punch a > hole through the router, and they both require a server in the outside > world. The first is STUN, where it appears that the interested parties > talk to the STUN server just to set up communication, and it determines > what their public IP and ephemeral ports are and lets them take it from > there. That technique won't work in some situations (which?), so there's > another approach called TURN, where all communication goes through a > relay. Then there's a test called ICE [2] that determines which > technique you need to use. > > As an interesting experiment, I'm wondering if anyone wants to try > ICE/STUN with me and see if we can open a direct TCP connection between > our computers? We could do a basic chat using netcat [3]. We could use a > pre-existing public STUN server [4]. (Or just configure our routers to > set up the NAT.) > > As a final note, I believe that the Jitsi Meet instance [5] Jes shared > uses WebRTC which establishes direct connections between chat > participants through a web server. In an ideal world I wouldn't need any > server, but could use a native chat application and "call" another > person directly with their IP address. > > Anybody have suggestions for more cool p2p software? > > 0: https://visualstudio.microsoft.com/services/live-share/ > 1: https://wiki.mikrotik.com/wiki/Manual:IP/Firewall/NAT#Port_mapping.2Fforwarding > 2: https://en.wikipedia.org/wiki/Interactive_Connectivity_Establishment > 3: http://man.openbsd.org/nc#CLIENT/SERVER_MODEL > 4: stun.stunprotocol.org powered by http://www.stunprotocol.org/ > 5: https://cafe.cyberia.club/ From joe at begriffs.com Sun May 3 05:38:13 2020 From: joe at begriffs.com (Joe Nelson) Date: Sun, 3 May 2020 00:38:13 -0500 Subject: Why so many servers? In-Reply-To: References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> Message-ID: <20200503053813.bjrsojbtqzqirb5a@begriffs.com> Ian Bicking wrote: > I've done a fair amount with WebRTC. 
It's not a great experience. > [...] It feels like what you want is the p2p sort of stuff, but most > of it is not realtime, it's more storage-oriented. Which chat program performs best in your experience? > You might also find IndieWeb interesting: https://indieweb.org/ ? not > exactly about this topic, but people that share some similar interest > in DIY tech. I like their message that people should self-host rather than getting put into a "silo." That said, I've also wasted time complying with some of their proposals like IndieAuth [0] or Microformats [1]. Nothing really came out of that work for me personally at least. 0: https://indieauth.com/auth?me=begriffs.com&client_id=https%3A%2F%2Findieauth.com&redirect_uri=https%3A%2F%2Findieauth.com%2Fsuccess 1: http://pin13.net/mf2/?url=begriffs.com > Technically Urbit tries to create a p2p networking system based on > stable but portable identities. The implementation is absurd, Urbit is totally weird. I visited their office a few times, and got a planet or whatever they call it (~fallyr-mogseb). A large part of the office is filled with a library of books in praise of monarchy, and the philosophical drive behind their technology is establishing some kind of digital feudalism. The address space is an intentionally limited resource, and the pitch to investors is that they can make an initial land grab. Not sure how the lords of address space control their serfs or whatever, but that's the draw. From ian at ianbicking.org Sun May 3 18:26:08 2020 From: ian at ianbicking.org (Ian Bicking) Date: Sun, 3 May 2020 13:26:08 -0500 Subject: Why so many servers? In-Reply-To: <20200503053813.bjrsojbtqzqirb5a@begriffs.com> References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> <20200503053813.bjrsojbtqzqirb5a@begriffs.com> Message-ID: On Sun, May 3, 2020 at 12:38 AM Joe Nelson wrote: > > Ian Bicking wrote: > > I've done a fair amount with WebRTC. It's not a great experience. > > [...] It feels like what you want is the p2p sort of stuff, but most > > of it is not realtime, it's more storage-oriented. > > Which chat program performs best in your experience? Realistically, top-to-bottom integrated software stacks perform best, like Facetime or Zoom. Then they can manage everything from the network to the servers, any services are controlled by the same authentication they use for the rest of the system. They still try to do p2p to improve service and save costs, but they have more visibility into failures and can adjust their heuristics accordingly. It feels like establishing a p2p connection is full of heuristics, there's no "right" way. > > You might also find IndieWeb interesting: https://indieweb.org/ ? not > > exactly about this topic, but people that share some similar interest > > in DIY tech. > > I like their message that people should self-host rather than getting > put into a "silo." That said, I've also wasted time complying with some > of their proposals like IndieAuth [0] or Microformats [1]. Nothing > really came out of that work for me personally at least. > > 0: https://indieauth.com/auth?me=begriffs.com&client_id=https%3A%2F%2Findieauth.com&redirect_uri=https%3A%2F%2Findieauth.com%2Fsuccess > 1: http://pin13.net/mf2/?url=begriffs.com Yeah... I get the message of IndieWeb stuff, but I guess I'm not surprised that it's not actually a very useful set of stuff :-/ > > Technically Urbit tries to create a p2p networking system based on > > stable but portable identities. The implementation is absurd, > > Urbit is totally weird. 
I visited their office a few times, and got a
> planet or whatever they call it (~fallyr-mogseb). A large part of the
> office is filled with a library of books in praise of monarchy, and the
> philosophical drive behind their technology is establishing some kind of
> digital feudalism. The address space is an intentionally limited
> resource, and the pitch to investors is that they can make an initial
> land grab. Not sure how the lords of address space control their serfs
> or whatever, but that's the draw.

Urbit's technology is also totally crap. I wonder if it's really a right wing crank welfare program. But the shape of what they are attempting isn't entirely off, maybe.

I guess what you'd want is some kind of dynamic DNS, where you could publish your computer name (e.g., fallyr-mogseb), along with whatever the connection details are for that moment. But maybe that doesn't make sense -- the way you connect with another computer depends somewhat on where that other computer is. So it's a negotiation. The way a WebRTC system works is that negotiation generally happens through some central signaling server. But maybe what you'd put in your DNS is just the information to get things started, that is based on a centralized system. E.g., Firefox's Push (where a website can send a message to your browser) is done with a single WebSocket connection that is kept open. There's just no other particular good way to get things started except to maintain or poll a connection (though there are many not-good ways to get things started). So instead of an address published in DNS, you really need a service that is ready to send a message. Then you build up from that message to a connection.

I feel like I've talked myself back into centralization, but I guess that's unsurprising. When I was doing simple realtime communication between browsers with TogetherJS we just had a 100-line server that echoed messages between browsers that connected with WebSockets, and it was reliable and hardly needed any changes. As long as you are doing soft-realtime (not A/V) it was also fast enough.

--
Ian Bicking | http://ianbicking.org

From joe at begriffs.com Mon May 4 22:58:45 2020
From: joe at begriffs.com (Joe Nelson)
Date: Mon, 4 May 2020 17:58:45 -0500
Subject: Why so many servers?
In-Reply-To:
References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com>
Message-ID: <20200504225845.pockmhbad53uy2ee@begriffs.com>

> > As an interesting experiment, I'm wondering if anyone wants to try
> > ICE/STUN with me and see if we can open a direct TCP connection
> > between our computers?

Forest Johnson wrote:
> I would do this! I was meaning to start developing and prototyping a
> simple p2p VPN soon.

Given this is something you're planning to work on, do you want to take the lead in this investigation? Just tell me what to do on my end and I'll help you debug.

> > In an ideal world I wouldn't need any server, but could use a native
> > chat application and "call" another person directly with their IP
> > address.
>
> I don't think that's possible unless one person has a public IP (no
> NAT). There has to be some sort of session establishment mechanism,
> otherwise the router being connected to won't know which LAN address to
> forward the connect packet to.

My understanding is that you designate a certain port, and tell the router to forward packets hitting that port to a particular computer in the LAN. So the public IP that an outsider connects to would be the address of the router.
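As a concrete sketch (the LAN address 192.168.88.10, the port 3456, and the public address 203.0.113.7 below are made-up placeholders, and the exact rule depends on the router), the kind of port mapping described in the Mikrotik wiki page linked earlier boils down to a single dst-nat rule, and the netcat client/server test from the nc man page is then just two commands:

    # on the router: forward public tcp/3456 to a machine on the LAN
    /ip firewall nat add chain=dstnat protocol=tcp dst-port=3456 action=dst-nat to-addresses=192.168.88.10 to-ports=3456

    # on the LAN machine (some netcat variants want -l -p 3456)
    nc -l 3456

    # on the outside peer, pointed at the router's public address
    nc 203.0.113.7 3456

If the mapping works, anything typed on either end shows up on the other, which is the "basic chat" from the original experiment.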
> > Anybody have suggestions for more cool p2p software? > > I don't think you can talk about p2p in 2020 without talking about IPFS. > I really like the ideas behind IPFS and I want to build software using it. Decentralized content-addressable storage certainly goes back earlier than $CURRENT_YEAR. I remember Freenet from twenty years ago doing a similar thing. IPFS is probably more efficient though. https://freenetproject.org/ > To make it even better, the IPFS project made the decision to split > a large part of their code base into a separate project called libp2p. Nice tip. Libp2p has a good description of NAT traversal too. https://docs.libp2p.io/concepts/nat/ They do kind of use their own system though. Rather than STUN they use the "identity protocol," and rather than TURN they use the "circuit relay protocol." They also rely on the SO_REUSEPORT option to setsockopt() which isn't in POSIX. https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/sys_socket.h.html It is available in Linux and BSD though: https://lwn.net/Articles/542629/ https://man.openbsd.org/setsockopt The LWN article is interesting because they motivate why the option was created, and there's some pushback in the comments. From joe at begriffs.com Mon May 4 23:05:12 2020 From: joe at begriffs.com (Joe Nelson) Date: Mon, 4 May 2020 18:05:12 -0500 Subject: Why so many servers? In-Reply-To: References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> <20200503053813.bjrsojbtqzqirb5a@begriffs.com> Message-ID: <20200504230512.ck7mvfvs6syt5fut@begriffs.com> > > Which chat program performs best in your experience? Ian Bicking wrote: > Realistically, top-to-bottom integrated software stacks perform best, > like Facetime or Zoom. Then they can manage everything from the > network to the servers, any services are controlled by the same > authentication they use for the rest of the system. I found a free comprehensive ebook about decentralized real-time communication, and yeah the setup looks tricky! https://rtcquickstart.org/ Looks like a bit of a hodgepodge to get it right. From forest.n.johnson at gmail.com Tue May 5 15:51:59 2020 From: forest.n.johnson at gmail.com (Forest Johnson) Date: Tue, 5 May 2020 10:51:59 -0500 Subject: Why so many servers? In-Reply-To: <20200504225845.pockmhbad53uy2ee@begriffs.com> References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> <20200504225845.pockmhbad53uy2ee@begriffs.com> Message-ID: On Mon, May 4, 2020 at 5:58 PM Joe Nelson wrote: > > > In an ideal world I wouldn't need any server, but could use a native > > > chat application and "call" another person directly with their IP > > > address. > > > > I don't think that's possible unless one person has a public IP (no > > NAT). There has to be some sort of session establishment mechanism, > > otherwise the router being connected to wont know which LAN address to > > forward the connect packet to. > > My understanding is that you designate a certain port, and tell the > router to forward packets hitting that port to a particular computer in > the LAN. So the public IP that an outsider connects to would be the > address of the router. Right, that basically counts as "having a public IP address". The problem is that only about 2% of users would be willing to even attempt this: the 10% most patient out of the 20% most tech-literate. I think 98% of people will simply move on and try a different program that doesn't require them to configure their network before they can use it. 
I was intentionally disregarding this possibility because I think it is not feasible for users.

> > > Anybody have suggestions for more cool p2p software?
> >
> > I don't think you can talk about p2p in 2020 without talking about IPFS.
> > I really like the ideas behind IPFS and I want to build software using it.
>
> Decentralized content-addressable storage certainly goes back earlier
> than $CURRENT_YEAR. I remember Freenet from twenty years ago doing a
> similar thing. IPFS is probably more efficient though.
> https://freenetproject.org/

I'm not saying that the idea is brand new for 2020, just that it's relevant to the discussion for 2020. And I do think it's kind of new; what's new is that people with money are starting to get involved. There is already the Cloudflare public IPFS HTTP gateway, and I heard a rumor that a "browser maker" (Mozilla) is working on IPFS native support for their browser. I don't think I ever heard of things like this happening with i2p, cjdns, bitmessage, freenet, urbit, etc.

From j3s at c3f.net Wed May 6 20:54:56 2020
From: j3s at c3f.net (j3s)
Date: Wed, 6 May 2020 15:54:56 -0500
Subject: Why so many servers?
In-Reply-To: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com>
References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com>
Message-ID: <7c6a401b-e428-e817-17d0-ac8d145eff80@c3f.net>

Very late to this conversation, but I have some cents to add.

On 5/1/20 11:29 PM, Joe Nelson wrote:
> Why don't our computers talk directly to one
> another instead?

This, broadly speaking, was never supposed to be a problem until NAT rolled around and screwed everything up as Joe pointed out. NAT altered how we think about networks forever. Notably, one thing that hasn't been brought up is IPv6.

IPv6 alleviates routing concerns by giving every device on the globe a publicly route-able address, with no NAT involved. The internet we were promised! But so far adoption has been slow, thus it's not very useful. But more and more carriers are leaning into IPv6, which will maybe, perhaps, lead to IPv4's deprecation in our lifetime.

> Same thing for screen sharing. A shared tmux session on the frostbyte
> server still requires the server
> Maybe the root problem is that many people operate in a local area
> network that doesn't have NAT configured to expose a listening port from a machine to the outside world.

I do not think the root problem is NAT.

I think an important and understated benefit of using a server is consistency - in terms of latency, throughput, and access.

Let's pretend that IPv6 is universal and everyone has direct peer-to-peer access. Yaas! This should help alleviate our access problem. Or does it?

- most ISPs block certain ports to prevent spam abuse
- corpo inspection systems may still block "suspicious p2p" traffic
- how do you know who to trust in a p2p network?
- certificate injections may make authentic p2p certs "invalid looking"

Therefore, some users can connect and some cannot. Those users may blame the software. Besides access, numerous complications are now introduced:

- security becomes much more complex to enforce (who decides who is violating rules?)
- proper sequencing requires some authority, which eliminates some benefit of p2p-centric scaling
- latency is likely higher for everyone

tl;dr: client/server models are typically more consistent, simple, accessible, stable, and low-latency than their p2p counterparts.

This makes the lives of both developers and users simpler.

Admittedly though, I am a fan of servers.
:D

j3s

From drewbenson at netjack.com Thu May 7 00:34:17 2020
From: drewbenson at netjack.com (Andrew Benson)
Date: Wed, 6 May 2020 19:34:17 -0500
Subject: Why so many servers?
In-Reply-To: <7c6a401b-e428-e817-17d0-ac8d145eff80@c3f.net>
References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> <7c6a401b-e428-e817-17d0-ac8d145eff80@c3f.net>
Message-ID:

Lol.. "... which will maybe, perhaps, lead to IPv4's deprecation in our lifetime" :-)

I'm with you on everything but the latency statement. Whether p2p has better or worse latency than a client-server setup really depends on the situation. For example, take the simplest possible situation -- 1 server in the middle, and 2 clients, all of which are singly connected to the public Internet, with the server essentially just forwarding data between the two. Like with two people on a video or audio call. Latency will almost always be better with p2p in this case, as you're saving a server round-trip, and the server isn't likely to be doing anything that will significantly reduce the amount of data flow between the two endpoints.

For most home Internet users (and even many business users), their upload speed is highly limited, which will often destroy any advantage p2p may have had once there are multiple peers.

That said, all the points are very valid: with NAT, ISP blocking, corporate blocking, and most firewalls getting in the way of "incoming" connections, p2p is an implementation problem.

> On May 6, 2020, at 3:54 PM, j3s wrote:
>
> Very late to this conversation, but I have some cents to add.
>
> On 5/1/20 11:29 PM, Joe Nelson wrote:
>> Why don't our computers talk directly to one
>> another instead?
>
> This, broadly speaking, was never supposed to be a problem until NAT rolled around and screwed everything up as Joe pointed out. NAT altered how we think about networks forever. Notably, one thing that hasn't been brought up is IPv6.
>
> IPv6 alleviates routing concerns by giving every device on the globe a publicly route-able address, with no NAT involved. The internet we were promised! But so far adoption has been slow, thus it's not very useful. But more and more carriers are leaning into IPv6, which will maybe, perhaps, lead to IPv4's deprecation in our lifetime.
>
>> Same thing for screen sharing. A shared tmux session on the frostbyte
>> server still requires the server
>
> > Maybe the root problem is that many people operate in a local area
> > network that doesn't have NAT configured to expose a listening port from a machine to the outside world.
>
> I do not think the root problem is NAT.
>
> I think an important and understated benefit of using a server is consistency - in terms of latency, throughput, and access.
>
> Let's pretend that IPv6 is universal and everyone has direct peer-to-peer access. Yaas! This should help alleviate our access problem. Or does it?
>
> - most ISPs block certain ports to prevent spam abuse
> - corpo inspection systems may still block "suspicious p2p" traffic
> - how do you know who to trust in a p2p network?
> - certificate injections may make authentic p2p certs "invalid looking"
>
> Therefore, some users can connect and some cannot. Those users may blame the software. Besides access, numerous complications are now introduced:
>
> - security becomes much more complex to enforce (who decides who is violating rules?)
> - proper sequencing requires some authority, which eliminates some benefit of p2p-centric scaling
> - latency is likely higher for everyone
>
> tl;dr: client/server models are typically more consistent, simple, accessible, stable, and low-latency than their p2p counterparts.
>
> This makes the lives of both developers and users simpler.
>
> Admittedly though, I am a fan of servers. :D
>
> j3s

From salo at saloits.com Wed May 6 21:24:08 2020
From: salo at saloits.com (Timothy J. Salo)
Date: Wed, 6 May 2020 16:24:08 -0500
Subject: Why so many servers?
In-Reply-To: <7c6a401b-e428-e817-17d0-ac8d145eff80@c3f.net>
References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> <7c6a401b-e428-e817-17d0-ac8d145eff80@c3f.net>
Message-ID: <47aa238b-e87d-6258-ab14-e7c90f56782f@saloits.com>

On 5/6/2020 3:54 PM, j3s wrote:
> On 5/1/20 11:29 PM, Joe Nelson wrote:
>> Why don't our computers talk directly to one
>> another instead?
>
> This, broadly speaking, was never supposed to be a problem until NAT
> rolled around and screwed everything up as Joe pointed out. NAT altered
> how we think about networks forever. Notably, one thing that hasn't been
> brought up is IPv6.

It's not just NATs, it's also firewalls. And, widely deployed firewalls are not just a fact of life, they are a good idea. Firewalls, by design, prevent the anything-to-anything connectivity that is being discussed here. We all have them, and we all benefit from them. So, today, even in the absence of NATs, people would have to poke holes in their firewalls to permit peer-to-peer connections (without the use of central servers).

> IPv6 alleviates routing concerns by giving every device on the globe a
> publicly route-able address, with no NAT involved. The internet we were
> promised! But so far adoption has been slow, thus it's not very useful.
> But more and more carriers are leaning into IPv6, which will maybe,
> perhaps, lead to IPv4's deprecation in our lifetime.

Except, in the face of firewalls, every device (including IPv6 devices) can't be reached globally (thank goodness). Home networks all have firewalls (as well as NATs). Large networks often (usually) have firewalls that prevent incoming connections to most machines, even though those machines have publicly routeable addresses. Anything else would be an administrative and security nightmare.

So, if you want to have servers or serverless peer-to-peer connections on your home network, you need to have the sophistication to configure your firewalls and NATs.

-tjs

From dfeldman.mn at gmail.com Thu May 7 05:44:05 2020
From: dfeldman.mn at gmail.com (Daniel Feldman)
Date: Thu, 7 May 2020 00:44:05 -0500
Subject: Why so many servers?
In-Reply-To: <47aa238b-e87d-6258-ab14-e7c90f56782f@saloits.com>
References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> <7c6a401b-e428-e817-17d0-ac8d145eff80@c3f.net> <47aa238b-e87d-6258-ab14-e7c90f56782f@saloits.com>
Message-ID:

Way back in the mid-2000s, there was this idea that applications on the network could request holes in the firewall programmatically so they could do peer-to-peer connections through a protocol called UPnP. It never really took off, although your home router still probably has some version of it. Instead, STUN took off, which is a hackier way to trick the firewall into allowing P2P connections, and it is still around.

In 2020, overlay networks like Nebula, ZeroTier, and Tailscale are really taking off.
They give every device its own address on an encrypted private network (similar to a VPN but with no central server), and you can run normal applications through TCP connections over that overlay network. This has a lot of similarities to P2P but is more practical for businesses.

Something to think about.

Daniel

From hello at robertdherb.com Thu May 7 14:05:53 2020
From: hello at robertdherb.com (Robbie D)
Date: Thu, 7 May 2020 09:05:53 -0500
Subject: Why so many servers?
In-Reply-To:
References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> <7c6a401b-e428-e817-17d0-ac8d145eff80@c3f.net> <47aa238b-e87d-6258-ab14-e7c90f56782f@saloits.com>
Message-ID: <72fbb3c1-c6ed-25e6-12c8-28193089fb8a@robertdherb.com>

On 5/7/2020 12:44 AM, Daniel Feldman wrote:
> Way back in the mid-2000s, there was this idea that applications on
> the network could request holes in the firewall programmatically so
> they could do peer-to-peer connections through a protocol called UPnP.
> It never really took off

To be honest, I'm kind of glad it didn't. I've never seen an implementation that lets you do UPnP on a per-device basis, so you'd either let nothing through, or potentially let everything through. At that point, you have to REALLY trust that no device on your network is insecure, and as we've seen from IoT devices, that's too much to ask of consumer devices.

And even if there were per-device rules on UPnP... Well, then you're right back where you started, having to configure your router for each device you want to allow through. It's a tough problem to solve, and I think STUN took off because it's as easy (for the user) as UPnP, but doesn't punch holes in your PC. Of course, there is still a level of trust needed, but at least getting into your network has the extra required step of compromising the server.

From drewbenson at netjack.com Thu May 7 14:12:00 2020
From: drewbenson at netjack.com (Andrew Benson)
Date: Thu, 7 May 2020 09:12:00 -0500
Subject: Why so many servers?
In-Reply-To: <72fbb3c1-c6ed-25e6-12c8-28193089fb8a@robertdherb.com>
References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> <7c6a401b-e428-e817-17d0-ac8d145eff80@c3f.net> <47aa238b-e87d-6258-ab14-e7c90f56782f@saloits.com> <72fbb3c1-c6ed-25e6-12c8-28193089fb8a@robertdherb.com>
Message-ID:

Lol yeah I remember when I first heard about UPnP -- thinking that it was a completely insane idea.

Maybe it's not insane, but ...

Somebody with more time on their hands than me needs to think of a new plan.

> On May 7, 2020, at 9:05 AM, Robbie D wrote:
>
>
>
> On 5/7/2020 12:44 AM, Daniel Feldman wrote:
>> Way back in the mid-2000s, there was this idea that applications on
>> the network could request holes in the firewall programmatically so
>> they could do peer-to-peer connections through a protocol called UPnP.
>> It never really took off
> To be honest, I'm kind of glad it didn't. I've never seen an implementation that lets you do UPnP on a per-device basis, so you'd either let nothing through, or potentially let everything through. At that point, you have to REALLY trust that no device on your network is insecure, and as we've seen from IoT devices, that's too much to ask of consumer devices.
>
> And even if there were per-device rules on UPnP... Well, then you're right back where you started, having to configure your router for each device you want to allow through. It's a tough problem to solve, and I think STUN took off because it's as easy (for the user) as UPnP, but doesn't punch holes in your PC.
Of course, there is still a level of trust needed, but at least getting into your network has the extra required step of compromising the server. From forest.n.johnson at gmail.com Thu May 7 16:29:05 2020 From: forest.n.johnson at gmail.com (Forest Johnson) Date: Thu, 7 May 2020 11:29:05 -0500 Subject: Why so many servers? In-Reply-To: References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> <7c6a401b-e428-e817-17d0-ac8d145eff80@c3f.net> <47aa238b-e87d-6258-ab14-e7c90f56782f@saloits.com> <72fbb3c1-c6ed-25e6-12c8-28193089fb8a@robertdherb.com> Message-ID: I just thought you guys would get a kick out of this -- Hacker Traverses NATs using this one weird trick: Network Admins HATE him! http://samy.pl/pwnat/ http://samy.pl/chownat/ On Thu, May 7, 2020 at 9:12 AM Andrew Benson wrote: > > Lol yeah I remember when I first heard about UPnP ? thinking that it was a completely insane idea. > > Maybe it?s not insane, but ? > > Somebody with more time on their hands than me needs to think of a new plan. > > > On May 7, 2020, at 9:05 AM, Robbie D wrote: > > > > > > > > On 5/7/2020 12:44 AM, Daniel Feldman wrote: > >> Way back in the mid-2000s, there was this idea that applications on > >> the network could request holes in the firewall programmatically so > >> they could do peer-to-peer connections through a protocol called UPnP. > >> It never really took off > > To be honest, I'm kind of glad it didn't. I've never seen an implementation that lets you do UPnP on a per-device basis, so you'd either let nothing through, or potentially let everything through. At that point, you have to REALLY trust that no device on your network is unsecure, and as we've seen from IoT devices, that's too much to ask of consumer devices. > > > > And even if there were per-device rules on UPnP... Well, then you're right back where you started, having to configure your router for each device you want to allow through. It's a tough problem to solve, and I think STUN took off because it's as easy (For the user) as UPnP, but doesn't punch holes in your PC. Of course, there is still a level of trust needed, but at least getting into your network has the extra required step of compromising the server. > From j3s at c3f.net Thu May 7 18:47:26 2020 From: j3s at c3f.net (jes) Date: Thu, 7 May 2020 13:47:26 -0500 Subject: Why so many servers? In-Reply-To: References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> <7c6a401b-e428-e817-17d0-ac8d145eff80@c3f.net> <47aa238b-e87d-6258-ab14-e7c90f56782f@saloits.com> <72fbb3c1-c6ed-25e6-12c8-28193089fb8a@robertdherb.com> Message-ID: <8dc3f18f-c4af-d680-fd69-fc8c6c5983e8@c3f.net> On 5/7/20 11:29 AM, Forest Johnson wrote: > I just thought you guys would get a kick out of this -- Hacker > Traverses NATs using this one weird trick: Network Admins HATE him! > > http://samy.pl/pwnat/ > http://samy.pl/chownat/ This was a wonderful read. I have never considered how UDP traversal and NAT interact before. Wow. From joe at begriffs.com Fri May 8 05:37:43 2020 From: joe at begriffs.com (Joe Nelson) Date: Fri, 8 May 2020 00:37:43 -0500 Subject: Wiresharking libtls Message-ID: <20200508053743.bfse3tnnp5crmfzr@begriffs.com> Hey June, have you tried decrypting LibreSSL traffic in wireshark? I'm trying to do it with my irc bouncer, but I can't get it to work. My RSA private key isn't good enough because my program does a Diffie Hellman key exchange with Freenode and creates an ephemeral session key. 
Programs like Firefox consult an SSLKEYLOGFILE environment variable to log those keys, but I don't think LibreSSL does. I'm trying to implement this, but having trouble.

If I could just extract the SSL connection from inside the tls structure (tls->ssl_conn, defined in tls_internal.h) then I think the following would work, but the internals of the structure are inaccessible to my program. Do you know how to extract the master key, or know a way to prevent the DH key exchange from happening?

bool tls_dump_keylog(struct tls *tls, char *path)
{
    size_t len_key;
    unsigned int len_id;
    SSL_SESSION *sess;
    unsigned char key[256];
    const unsigned char *id;
    FILE *fp;

    sess = SSL_get_session(tls->ssl_conn);
    if (!sess)
    {
        fprintf(stderr, "Failed to get SSL session for TLS\n");
        return false;
    }
    len_key = SSL_SESSION_get_master_key(sess, key, sizeof key);
    id = SSL_SESSION_get_id(sess, &len_id);

    if ((fp = fopen(path, "w")) == NULL)
    {
        fprintf(stderr, "Unable to write keylog to '%s'\n", path);
        return false;
    }
    fputs("RSA Session-ID:", fp);
    _writehex(fp, id, len_id);
    fputs(" Master-Key:", fp);
    _writehex(fp, key, len_key);
    fputs("\n", fp);
    fclose(fp);
    return true;
}

From nicholasdrozd at gmail.com Fri May 8 15:08:32 2020
From: nicholasdrozd at gmail.com (Nicholas Drozd)
Date: Fri, 8 May 2020 10:08:32 -0500
Subject: Feature Request: Automatic Discussion Threads for New Blog Posts
Message-ID:

This is a simple idea (I think). Frostbyte has a blog aggregator [fn:1]. When a new post is found, create a new discussion thread on Friends [fn:2] with a title along the lines of "New blog post: TITLE TITLE TITLE (AUTHOR AUTHOR)". That's it.

OBJECTIONS AND REBUTTALS

O: You just want this because six of the past eleven posts on the aggregator are yours.

R: It does appear that I am the only one blogging right now. There are three April posts on the aggregator, and they are all mine. It would be nice to change that, and perhaps this would be a good way to spur others to do more blogging.

O: This will lead to the forum getting flooded with "discussion threads".

R: That sounds like a good problem to have, like having too much money. If so much blogging is getting done that the quality of the forum is affected, we can revisit the idea. But in that scenario, the community will by hypothesis be extremely active, which is great.

O: This is a "request" for a feature. Why don't you just implement it yourself?

R: Despite all the shit I talk, I'm actually not very good at making stuff, and I also don't understand how things like "email" and "websites" work. That said, I would be happy to help out once somebody has a crude working prototype.

O: What if somebody's blog gets hacked? The forum will be flooded with bot ads for "MATTRESS SALE BIG DISCOUNT".

R: Maybe. We'll cross that bridge when we get to it.

O: If a new feed gets added to the aggregator, then all the posts from that blog will show up, and the forum will get flooded.

R: Yes, that's something to keep in mind, and undoubtedly there are other similar issues. But really those are just implementation details.

* Footnotes

[fn:1] http://frostbyte.cc/blog.html

[fn:2] Is there an official name for this forum? I refer to it as "Friends", which always makes me think of "friends of Bill" or "friends of Dorothy".
From j3s at c3f.net Fri May 8 15:25:44 2020
From: j3s at c3f.net (jes)
Date: Fri, 8 May 2020 10:25:44 -0500
Subject: Feature Request: Automatic Discussion Threads for New Blog Posts
In-Reply-To:
References:
Message-ID:

My 2c: I feel that these sorts of automatic posts in mailing lists make me pay less attention to them, because it makes the list feel less human and more robotic. Have you ever gotten so many automated emails of some sort in your inbox that you make a rule to shove them into a folder, never to open the folder again?

I like Drew Devault's approach here, generally. From his blog:

> Have a comment on one of my posts?
> Start a discussion in my public inbox by sending
> an email to ~sircmpwn/public-inbox at lists.sr.ht

This way, mailing list content stays user-generated and doesn't feel so spammy. A mailto: response link could be embedded next to the posts on the aggregator; maybe that could serve as a solution?

j3s

From dfeldman.mn at gmail.com Fri May 8 19:00:26 2020
From: dfeldman.mn at gmail.com (Daniel Feldman)
Date: Fri, 8 May 2020 14:00:26 -0500
Subject: Why so many servers?
In-Reply-To:
References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> <7c6a401b-e428-e817-17d0-ac8d145eff80@c3f.net> <47aa238b-e87d-6258-ab14-e7c90f56782f@saloits.com> <72fbb3c1-c6ed-25e6-12c8-28193089fb8a@robertdherb.com>
Message-ID:

On Thu, May 7, 2020 at 11:56 AM Forest Johnson wrote:
>
> I just thought you guys would get a kick out of this -- Hacker
> Traverses NATs using this one weird trick: Network Admins HATE him!
>

That is one of the hackiest weirdest things I have ever seen :) and a perfect example of why a NAT is not the same as a firewall.

Daniel

From drewbenson at netjack.com Fri May 8 21:01:41 2020
From: drewbenson at netjack.com (Andrew Benson)
Date: Fri, 8 May 2020 16:01:41 -0500
Subject: Why so many servers?
In-Reply-To:
References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> <7c6a401b-e428-e817-17d0-ac8d145eff80@c3f.net> <47aa238b-e87d-6258-ab14-e7c90f56782f@saloits.com> <72fbb3c1-c6ed-25e6-12c8-28193089fb8a@robertdherb.com>
Message-ID:

Super interesting and cool. And good to know.
I'm pretty sure a lot of people think that they essentially HAVE a firewall if they have NAT.

> On May 8, 2020, at 2:00 PM, Daniel Feldman wrote:
>
> On Thu, May 7, 2020 at 11:56 AM Forest Johnson
> wrote:
>>
>> I just thought you guys would get a kick out of this -- Hacker
>> Traverses NATs using this one weird trick: Network Admins HATE him!
>>
>
> That is one of the hackiest weirdest things I have ever seen :) and a
> perfect example of why a NAT is not the same as a firewall.
> Daniel

From forest.n.johnson at gmail.com Fri May 8 21:23:41 2020
From: forest.n.johnson at gmail.com (Forest Johnson)
Date: Fri, 8 May 2020 16:23:41 -0500
Subject: Why so many servers?
In-Reply-To:
References: <20200502042948.tf4fkhzlyeldvqxr@begriffs.com> <7c6a401b-e428-e817-17d0-ac8d145eff80@c3f.net> <47aa238b-e87d-6258-ab14-e7c90f56782f@saloits.com> <72fbb3c1-c6ed-25e6-12c8-28193089fb8a@robertdherb.com>
Message-ID:

Right, but it's important to note you have to run the pwnat server behind the NAT router for someone to be able to connect through your NAT router. And something tells me it's not exactly reliable, LOL! I bet at very least it only works when STUN works. I think pwnat is basically STUN but with the weird ICMP trick to allow the server to discover the client's address.

I believe STUN doesn't work on most mobile networks. Something about symmetric NAT versus asymmetric NAT.
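Here is a minimal sketch in C of the STUN piece of the experiment from earlier in the thread (not a robust client: IPv4 only, fixed transaction ID, no retransmission or timeout; the hostname in the comment is the public server from footnote [4]). It sends one binding request over UDP and prints the public address and port the server saw. The symmetric-NAT behavior mentioned above shows up as that mapped port changing depending on which destination you talk to, which is why a mapping learned from a STUN server can't always be reused to reach a peer.

/* stun-probe.c -- send one STUN binding request and print the
 * XOR-MAPPED-ADDRESS from the reply (RFC 5389 message format).
 *
 *   cc -o stun-probe stun-probe.c
 *   ./stun-probe stun.stunprotocol.org 3478
 */
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    /* 20-byte STUN header: type, length, magic cookie, transaction id */
    unsigned char req[20] = {
        0x00, 0x01,             /* binding request          */
        0x00, 0x00,             /* message length: 0        */
        0x21, 0x12, 0xA4, 0x42, /* magic cookie             */
        1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 /* should be random */
    };
    unsigned char resp[512];
    struct addrinfo hints, *srv;
    ssize_t n, i;
    int sock;

    if (argc != 3)
    {
        fprintf(stderr, "usage: %s host port\n", argv[0]);
        return 1;
    }
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_DGRAM;
    if (getaddrinfo(argv[1], argv[2], &hints, &srv) != 0)
    {
        fprintf(stderr, "cannot resolve %s\n", argv[1]);
        return 1;
    }
    sock = socket(srv->ai_family, srv->ai_socktype, 0);
    sendto(sock, req, sizeof req, 0, srv->ai_addr, srv->ai_addrlen);
    n = recvfrom(sock, resp, sizeof resp, 0, NULL, NULL);
    freeaddrinfo(srv);
    close(sock);
    if (n < 20)
    {
        fprintf(stderr, "short or missing response\n");
        return 1;
    }

    /* walk attributes: 2-byte type, 2-byte length, value padded to 4 */
    for (i = 20; i + 4 <= n; )
    {
        int type = resp[i] << 8 | resp[i+1];
        int len  = resp[i+2] << 8 | resp[i+3];

        if (type == 0x0020 && len >= 8 && resp[i+5] == 0x01)
        {
            /* XOR-MAPPED-ADDRESS, IPv4: un-xor with the magic cookie */
            printf("public mapping: %d.%d.%d.%d port %d\n",
                   resp[i+8] ^ 0x21, resp[i+9] ^ 0x12,
                   resp[i+10] ^ 0xA4, resp[i+11] ^ 0x42,
                   (resp[i+6] << 8 | resp[i+7]) ^ 0x2112);
            return 0;
        }
        i += 4 + ((len + 3) & ~3);
    }
    fprintf(stderr, "no XOR-MAPPED-ADDRESS in reply\n");
    return 1;
}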
On Fri, May 8, 2020 at 4:01 PM Andrew Benson wrote: > > Super intertesting and cool. And good to know. > I?m pretty sure a lot of people think that they essentially HAVE a firewall if they have NAT. > > > On May 8, 2020, at 2:00 PM, Daniel Feldman wrote: > > > > On Thu, May 7, 2020 at 11:56 AM Forest Johnson > > wrote: > >> > >> I just thought you guys would get a kick out of this -- Hacker > >> Traverses NATs using this one weird trick: Network Admins HATE him! > >> > > > > That is one of the hackiest weirdest things I have ever seen :) and a > > perfect example of why a NAT is not the same as a firewall. > > Daniel > From joe at begriffs.com Sat May 9 02:01:43 2020 From: joe at begriffs.com (Joe Nelson) Date: Fri, 8 May 2020 21:01:43 -0500 Subject: Wiresharking libtls In-Reply-To: <20200508053743.bfse3tnnp5crmfzr@begriffs.com> References: <20200508053743.bfse3tnnp5crmfzr@begriffs.com> Message-ID: <20200509020143.wrkv4dwkqibm3i5x@begriffs.com> Joe Nelson wrote: > If I could just extract the SSL connection from inside the tls > structure (tls->ssl_conn, defined in tls_internal.h) then I think the > following would work, but the internals of the structure are > inaccessible to my program. Tried a craptastic hack on this branch: https://github.com/begriffs/picobounce/blob/keylogging/tls_debug.c I copied the struct definition from the internal header and put it in my own source file with a different name and casted to it. Still doesn't work because I'm seeing my error message "Failed to get SSL session for TLS" Really wanted to get Wireshark working for this. When I have to fall back to debug print statements rather than using tools I feel like I've lost. Maybe I should write a small MITM TLS proxy that logs the traffic in each direction to the terminal in different colors. From joe at begriffs.com Sat May 9 02:54:58 2020 From: joe at begriffs.com (Joe Nelson) Date: Fri, 8 May 2020 21:54:58 -0500 Subject: Wiresharking libtls In-Reply-To: <20200509020143.wrkv4dwkqibm3i5x@begriffs.com> References: <20200508053743.bfse3tnnp5crmfzr@begriffs.com> <20200509020143.wrkv4dwkqibm3i5x@begriffs.com> Message-ID: <20200509025458.3sbfm76h6yms7yqd@begriffs.com> Joe Nelson wrote: > Really wanted to get Wireshark working for this. When I have to fall > back to debug print statements rather than using tools I feel like I've > lost. Update, I got it working! The TLS handshake hadn't happened when I was attempting to get the session id and ephemeral key. Now I got them and Wireshark showed me the decrypted communication under the "Follow TLS" option. I'll stop spamming the list about this. From dave at 19a6.net Sun May 10 15:30:22 2020 From: dave at 19a6.net (Dave Bucklin) Date: Sun, 10 May 2020 10:30:22 -0500 Subject: Feature Request: Automatic Discussion Threads for New Blog Posts In-Reply-To: References: Message-ID: <20200510153022.GC27219@19a6.tech> On Fri, May 08, 2020 at 10:08:32AM -0500, Nicholas Drozd wrote: > This is a simple idea (I think). Frostbyte has a blog > aggregator [fn:1]. When a new post is found, create a new discussion > thread on Friends [fn:2] with a title along the lines of > "New blog post: TITLE TITLE TITLE (AUTHOR AUTHOR)". That's it. I agree with j3s on this -- I think having automated threads makes them easy to ignore. I think it would be more valuable if the author emailed the list to 1) advertise the blog post and 2) prompt discussion on one or more aspects of it. 
I think Joe has done a good job of this in the past, sending a brief announcement that includes some of the challenges and discoveries he made along the way.

I loved your OBJECTIONS AND REBUTTALS section. I'll have to remember that tactic.

From joe at begriffs.com Sun May 10 17:22:21 2020
From: joe at begriffs.com (Joe Nelson)
Date: Sun, 10 May 2020 12:22:21 -0500
Subject: Feature Request: Automatic Discussion Threads for New Blog Posts
In-Reply-To:
References:
Message-ID: <20200510172221.m5ddhfrrsnx7ttns@begriffs.com>

> OBJECTIONS AND REBUTTALS

St. Nicholas Aquinas Drozd has joined the chat. :)
https://www.ccel.org/ccel/aquinas/summa.FP_Q1_A1.html

> O: This will lead to the forum getting flooded with "discussion
> threads".
>
> R: That sounds like a good problem to have, like having too much
> money. If so much blogging is getting done that the quality of the
> forum is affected, we can revisit the idea. But in that scenario, the
> community will by hypothesis be extremely active, which is great.

I answer that,

        -==-
     {\ ssss /}
    { \sS""Ss/ }
    { SS\__/SS }
     { /`\/`\ }
    {_| ( ) |_}
       \/)(\/
        | |
        | |
        / \
      `""""""`

Discussion threads are differentiated according to the various means through which knowledge is obtained. For the web commenter and the emailer both may prove the same conclusion: that the code, for instance, is sound: the commenter by means of invective (i.e. public display), but the emailer by means of threads themselves. Hence there is no reason why those things which may be learned from blogs, so far as they can be known by natural reason, may not also be taught us by an email list so far as they fall within revelation by SMTP. Hence messages included in the list differ in kind from those comments which are part of blogs. QED.

More seriously though, I like the personal quality of our list. Right now people post each message intentionally, expecting to discuss it. Auto-posts make it less personal. That said, it sounds like a cool experiment. I could create a separate list like blogs at talk.begriffs.com to carry the auto-posts and then anyone interested could subscribe to it.

> O: This is a "request" for a feature. Why don't you just implement it
> yourself?
>
> R: Despite all the shit I talk, I'm actually not very good at making
> stuff, and I also don't understand how things like "email" and
> "websites" work. That said, I would be happy to help out once somebody
> has a crude working prototype.

Something like this?
https://pypi.org/project/rss2email/

From joe at begriffs.com Mon May 25 01:14:06 2020
From: joe at begriffs.com (Joe Nelson)
Date: Sun, 24 May 2020 20:14:06 -0500
Subject: PGCon available for free online next week!
Message-ID: <20200525011406.GA55750@begriffs.com>

The PostgreSQL conference PGCon will be happening online and it's free this year. I attended in years prior, and recommend it wholeheartedly. It has tutorials, and deep technical information from contributors, so you can see where the project is heading.

https://www.pgcon.org/2020/

From mxu at uribe.cc Wed May 27 12:03:35 2020
From: mxu at uribe.cc (Mauricio Uribe)
Date: Wed, 27 May 2020 08:03:35 -0400
Subject: PGCon available for free online next week!
In-Reply-To: <20200525011406.GA55750@begriffs.com>
References: <20200525011406.GA55750@begriffs.com>
Message-ID:

On 5/24/20 9:14 PM, Joe Nelson wrote:
> The PostgreSQL conference PGCon will be happening online and it's free
> this year. I attended in years prior, and recommend it wholeheartedly.
> It has tutorials, and deep technical information from contributors, so
> you can see where the project is heading.
>
> https://www.pgcon.org/2020/

Thanks for sharing this Joe! I actually was looking for a conference to (virtually) attend. Originally I was looking for a Linux- or Python-centric conf... but PGCon sounds quite interesting!

--
Best regards,
Mauricio ("mxu") Uribe