i2pd torrent throughput issues compared to java version???

Post by cumlord »

I've been trying to spread load across i2pd routers and haven't been able to achieve the same throughput as with Java routers. i2pd works great for transit tunnels and other services and barely touches the CPU, especially on low-power devices, and the routers are also much easier to manage and configure.

But for torrents? The max I've been able to get with i2pd is around 600 kb/s in either direction with BiglyBT. On the same devices I spun up Java and was back in the megabit range.

I've noticed that with i2pd the percentage of successful tunnels can be very low; it's like it has to keep building new tunnels constantly to keep up with the number of failed ones. Curious if it could have something to do with Java being more stringent about which routers it avoids?
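
For reference, that success rate can be read straight off the i2pd webconsole summary page; a quick way to watch it, assuming the default console address 127.0.0.1:7070 (the exact wording of the line may differ between versions):

Code: Select all

# scrape the tunnel creation success rate from the webconsole summary page
# (default [http] address/port; adjust if the console is configured differently)
curl -s http://127.0.0.1:7070/ | grep -io 'tunnel creation success rate[^<]*'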

It also seems like the more client tunnels are used, the lower the success rate gets in i2pd, while with the Java version it doesn't seem to matter how many are used, even when its transit tunnel count is comparatively very low (and resource usage very high...).

Anyone else run into this or have ideas?

Re: i2pd torrent throughput issues compared to java version???

Post by lgillis »

A comparison under reproducible conditions would certainly be useful: the same number of tunnels and hops, and of course the same number of participants; it would also be worth comparing the standalone Java router against the Java router built into BiglyBT, etc.

Re: i2pd torrent throughput issues compared to java version???

Post by cumlord »

You're right. Here's what I've used, which for me at least is reproducible. I haven't compared the built-in Java router, since I run BiglyBT and the routers on separate machines that I ssh into.

BiglyBT (3.5.0.0) settings on machine 1: under the I2P helper, 3 tunnels down, 10 tunnels up, 3 hops, "I2P only" and "Other" enabled (not mixed), "automatically adjust tunnel quantities based on load" enabled, both DHT options enabled. All other non-I2P functionality disabled (e.g. Mainline DHT, Distributed DB, chat).
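
(For what it's worth, I'm not sure which interface the helper ends up using against an external router, since both SAM and I2CP are enabled in the config below, but either way the tunnel counts/hops above are per-session options supplied by the client application rather than anything taken from i2pd.conf. Over SAM, a session with those settings would be created roughly like the sketch below; the session ID is made up and each command is a single line on the wire.)

Code: Select all

# SAM v3 sketch, illustrative only -- the real session is created by the I2P helper
HELLO VERSION MIN=3.1 MAX=3.3
SESSION CREATE STYLE=STREAM ID=biglybt DESTINATION=TRANSIENT inbound.length=3 outbound.length=3 inbound.quantity=3 outbound.quantity=10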

i2pd (2.50.2) on machine 2: mostly default settings, bandwidth X, share 100, floodfill off, some port changes so multiple routers can run, and tunnel quantities increased a little for stability since tunnels seem to fail often.

Java I2P (2.4.0-0-2) on machine 2: floodfill disabled, bandwidth share 90%.
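
(For anyone reproducing the Java side: those two settings correspond to router.config properties along the lines below. This is just a sketch; they're normally set through the console UI.)

Code: Select all

# router.config equivalents of "floodfill disabled, 90% bandwidth share"
router.floodfillParticipant=false
router.sharePercentage=90
# optional caps to mirror i2pd's bandwidth = X setting:
# i2np.bandwidth.inboundKBytesPerSecond=X
# i2np.bandwidth.outboundKBytesPerSecond=X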


portion of i2pd config:

Code: Select all

bandwidth = X
share = 100
notransit = false
floodfill = false

[ntcp2]
enabled = true

[ssu2]
enabled = true

[httpproxy]
enabled = true
address = 127.0.0.1
port = XXXX
keys = http-proxy-keys.dat
addresshelper = true
inbound.length = 3
inbound.quantity = 5
outbound.length = 3
outbound.quantity = 5
signaturetype = 7


[socksproxy]
enabled = true
address = 127.0.0.1
port = XXXXX
keys = socks-proxy-keys.dat

[sam]
enabled = true
address = 127.0.0.1
port = XXXXX

[bob]
enabled = false

[i2cp]
enabled = true
address = 127.0.0.1
port = XXXXX

[precomputation]
elgamal = true

[upnp]
enabled = true
name = I2Pd

[reseed]
verify = true

[limits]
transittunnels = 5000
openfiles = 0
coresize = 0

[exploratory]
inbound.length = 3
inbound.quantity = 10
outbound.length = 3
outbound.quantity = 10

[persist]
profiles = true
addressbook = true

[cpuext]
aesni = true

I also followed their instructions to override the systemd service defaults, but this didn't seem to change anything in my case:

Code: Select all

mkdir -p /etc/systemd/system/i2pd.service.d/
touch /etc/systemd/system/i2pd.service.d/override.conf

# contents of /etc/systemd/system/i2pd.service.d/override.conf:
[Service]
LimitNOFILE=16386
LimitCORE=infinity
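
(Whether the override actually applied can be sanity-checked with standard systemd commands, e.g.:)

Code: Select all

systemctl daemon-reload
systemctl restart i2pd
systemctl show i2pd -p LimitNOFILE -p LimitCORE
# or inspect the running process directly
cat /proc/$(pidof i2pd)/limits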

With this instance of BiglyBT and this router, i2pd ran between 400-700 kb/s and used about 30% of the CPU. I knew from my "main" router (Java; the goal is to shut it down since I realized it's overkill) that I ought to be seeing closer to 2-4 mb/s, and much higher across multiple BiglyBT instances. I left it like that for a while (over a week) to see if maybe it just wasn't well integrated yet.

So I did a fresh Java install on that same machine (floodfill disabled, 90% bandwidth share), turned i2pd off, and within about 30 minutes it was around 1500 kb/s. CPU usage is maxed out at this speed though, so I think 1500 is the max I can get from Java on that series of machine.

I have tried reverting to i2pd just to be sure, and it goes back to 400-700 kb/s.

Anecdotal observations:
  • I've tried this on two other types of low-powered machines with different OSes, and in those cases the upload range was about the same, 400-700 kb/s. In all cases the routers were well integrated, holding 5000 transit tunnels, occasionally doing more than 4 mb/s of transit, with CPU usage below 50%.
  • At one point I limited the number of transit tunnels, thinking maybe they were handling too much transit (see the snippet after this list). Past a certain point this seemed to severely degrade performance, especially below ~1000 transit tunnels: by then there was a very high rate of tunnel failure (<10% success rate), and speeds would either cut out from outright tunnel failure in one direction or the other, or would often sit at 100-200 kb/s.
  • On the 3 devices I've tried this on, it can take several minutes for tunnels to be built for BiglyBT, and many tunnels fail during that process. On the device where I tried the Java install, by comparison, the tunnels were built nearly instantly.
  • Certain tunnels in i2pd seem to be highly preferred: it shunts the majority of traffic through 1-3 tunnels, barely uses the others, and then those high-traffic tunnels fail. In Java, the bandwidth seems to be spread out more evenly.
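
For the transit-limiting test in the second bullet, "limiting" just means lowering transittunnels in the [limits] section of i2pd.conf and restarting, e.g. (1000 here is only an example value):

Code: Select all

[limits]
# lowered from 5000 for the test; going much below ~1000 is where it got noticeably worse
transittunnels = 1000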
Not sure if this is a problem with my setup, some kind of issue with how BiglyBT communicates with the router, or something else that causes all the apparent tunnel failures, or any other reason for the slow speeds, but I appreciate the time if anyone has advice.