cloud backend: "excessive reactor delay" warning in foolscap log during crawler run #1888

Closed
opened 2012-12-05 04:18:10 +00:00 by davidsarah · 5 comments
davidsarah commented 2012-12-05 04:18:10 +00:00
Owner
```
04:12:29.018 L20 []#1001 Starting factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Ffn%2F>>
04:12:31.475 L20 []#1002 Stopping factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Ffn%2F>>
04:12:31.476 L20 []#1003 Starting factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Ffo%2F>>
04:12:32.140 L20 []#1004 Stopping factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Ffo%2F>>
04:12:32.141 L20 []#1005 Starting factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Ffp%2F>>
04:12:33.387 L10 []#1006 get_stats() -> {'stats': {'storage_server.accepting_immutable_shares': 1, 'load_monitor.max_load': 1.1067259311676025, 'load_monitor.avg_load': 0.042395317554473878, 'node.uptime': 327.79333305358887, 'cpu_monitor.1min_avg': 0.00066662406928675157, 'cpu_monitor.total': 9.1099999999999994, 'storage_server.allocated': 0, 'storage_server.total_bucket_count': 2, 'cpu_monitor.5min_avg': 0.0032332485251684402}, 'counters': {}}
04:12:35.599 L20 []#1007 Stopping factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Ffp%2F>>
04:12:35.600 L20 []#1008 Starting factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Ffq%2F>>
04:12:37.038 L23 []#1009 excessive reactor delay ({'incarnation': ('\x93G\x9ac\x98\xac`\xc7', None), 'level': 23, 'format': 'excessive reactor delay (%ss)', 'args': (1.160836935043335,), 'num': 1009, 'time': 1354680757.0389249}s)
04:12:37.048 L20 []#1010 Stopping factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Ffq%2F>>
04:12:37.048 L20 []#1011 Starting factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Ffr%2F>>
04:12:38.564 L20 []#1012 Stopping factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Ffr%2F>>
04:12:38.565 L20 []#1013 Starting factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Ffs%2F>>
04:12:40.639 L23 []#1014 excessive reactor delay ({'incarnation': ('\x93G\x9ac\x98\xac`\xc7', None), 'level': 23, 'format': 'excessive reactor delay (%ss)', 'args': (1.1003279685974121,), 'num': 1014, 'time': 1354680760.639235}s)
04:12:40.640 L20 []#1015 Stopping factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Ffs%2F>>
04:12:40.648 L20 []#1016 Starting factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Fft%2F>>
04:12:41.921 L20 []#1017 Stopping factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Fft%2F>>
04:12:41.922 L20 []#1018 Starting factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Ffu%2F>>
04:12:43.097 L20 []#1019 Stopping factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Ffu%2F>>
04:12:43.098 L20 []#1020 Starting factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Ffv%2F>>
04:12:45.710 L20 []#1021 Stopping factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Ffv%2F>>
04:12:45.711 L20 []#1022 Starting factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Ffw%2F>>
04:12:47.089 L20 []#1023 Stopping factory <HTTPClientFactory: <http://lae-f7eace39-bph5vpnzvzbt7bifap7pbevyodpsmyy7.s3.amazonaws.com/?prefix=shares%2Ffw%2F>>
```

I think this was a bucket counter run. It was on an LAE storage server (a t1.micro EC2 instance), so it is possible that the instance was using too much CPU and was throttled, but I'm not sure of that.

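For context on what the warning measures: it comes from a watchdog that notices when the Twisted reactor falls behind schedule. Below is a minimal sketch of how such a watchdog can work (an illustration only, not foolscap's actual implementation; `POLL_INTERVAL` and `THRESHOLD` are assumed values). A tick is scheduled periodically, and its actual firing time is compared against the expected one; anything that blocks the event loop (a long synchronous call, or the VM being starved of CPU, as suspected here) shows up as lateness.

```python
# Minimal sketch of a reactor-delay watchdog (illustrative only; this
# is not foolscap's actual implementation, and POLL_INTERVAL/THRESHOLD
# are assumed values). A tick runs every POLL_INTERVAL seconds; if the
# reactor is blocked or starved of CPU, the tick fires late and the
# excess is reported as the delay.
from twisted.internet import reactor, task

POLL_INTERVAL = 1.0  # seconds between ticks (assumption)
THRESHOLD = 1.0      # report delays longer than this (assumption)

last = [reactor.seconds()]

def check_delay():
    now = reactor.seconds()
    # How much later than expected did this tick fire?
    delay = (now - last[0]) - POLL_INTERVAL
    last[0] = now
    if delay > THRESHOLD:
        print("excessive reactor delay (%ss)" % delay)

task.LoopingCall(check_delay).start(POLL_INTERVAL)
reactor.run()
```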
tahoe-lafs added the unknown, normal, defect, 1.9.2 labels 2012-12-05 04:18:10 +00:00
tahoe-lafs added this to the undecided milestone 2012-12-05 04:18:10 +00:00
davidsarah commented 2012-12-05 04:19:07 +00:00
Author
Owner

If I'm reading the log correctly, the delays were for 1.16 and 1.10 seconds.

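That reading matches the raw event dicts above: the `%s` in the format string was never interpolated, so the whole event dict got printed, but the delay is recoverable as the first element of `'args'`. For example, using the values copied from event #1009:

```python
# Reconstruct the intended message from the raw foolscap event dict
# (values copied verbatim from event #1009 in the log above).
event = {'format': 'excessive reactor delay (%ss)',
         'args': (1.160836935043335,)}
print(event['format'] % event['args'])
# -> excessive reactor delay (1.160836935043335s)
```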
tahoe-lafs modified the milestone from undecided to 1.12.0 2013-07-22 20:49:12 +00:00

Milestone renamed
warner modified the milestone from 1.12.0 to 1.13.0 2016-03-22 05:02:25 +00:00

renaming milestone
warner modified the milestone from 1.13.0 to 1.14.0 2016-06-28 18:17:14 +00:00

Moving open issues out of closed milestones.
exarkun modified the milestone from 1.14.0 to 1.15.0 2020-06-30 14:45:13 +00:00

The established line of development on the "cloud backend" branch has been abandoned. This ticket is being closed as part of a batch-ticket cleanup for "cloud backend"-related tickets.

If this is a bug, it is probably genuinely no longer relevant. The "cloud backend" branch is too large and unwieldy to ever be merged into the main line of development (particularly now that the Python 3 porting effort is significantly underway).

If this is a feature, it may be relevant to some future efforts - if they are sufficiently similar to the "cloud backend" effort - but I am still closing it because there are no immediate plans for a new development effort in such a direction.

Tickets related to the "leasedb" are included in this set because the "leasedb" code lives on the "cloud backend" branch and is fairly well intertwined with it. If there is interest in changing the lease implementation at some future time, that effort will essentially have to be restarted as well.

exarkun added the wontfix label 2020-10-30 12:35:44 +00:00
Reference: tahoe-lafs/trac-2024-07-25#1888