Re: [dtn-users] A problem with dtntunnel

"Zoller, David A. (MSFC-EO60)[HOSC SERVICES CONTRACT]" <david.a.zoller@nasa.gov> Thu, 13 December 2012 13:46 UTC

Return-Path: <david.a.zoller@nasa.gov>
X-Original-To: dtn-users@ietfa.amsl.com
Delivered-To: dtn-users@ietfa.amsl.com
Received: from localhost (localhost [127.0.0.1]) by ietfa.amsl.com (Postfix) with ESMTP id 2609A21F8979 for <dtn-users@ietfa.amsl.com>; Thu, 13 Dec 2012 05:46:41 -0800 (PST)
X-Virus-Scanned: amavisd-new at amsl.com
X-Spam-Flag: NO
X-Spam-Score: -6.197
X-Spam-Level:
X-Spam-Status: No, score=-6.197 tagged_above=-999 required=5 tests=[AWL=-0.201, BAYES_00=-2.599, HTML_MESSAGE=0.001, J_CHICKENPOX_92=0.6, NORMAL_HTTP_TO_IP=0.001, RCVD_IN_DNSWL_MED=-4, WEIRD_PORT=0.001]
Received: from mail.ietf.org ([64.170.98.30]) by localhost (ietfa.amsl.com [127.0.0.1]) (amavisd-new, port 10024) with ESMTP id 26t6QlJ378Vw for <dtn-users@ietfa.amsl.com>; Thu, 13 Dec 2012 05:46:30 -0800 (PST)
Received: from ndmsnpf01.ndc.nasa.gov (ndmsnpf01.ndc.nasa.gov [198.117.0.121]) by ietfa.amsl.com (Postfix) with ESMTP id 0E48021F887A for <dtn-users@irtf.org>; Thu, 13 Dec 2012 05:46:30 -0800 (PST)
Received: from ndmsppt04.ndc.nasa.gov (ndmsppt04.ndc.nasa.gov [198.117.0.103]) by ndmsnpf01.ndc.nasa.gov (Postfix) with ESMTP id AABB8260424; Thu, 13 Dec 2012 07:46:24 -0600 (CST)
Received: from ndmshub02.ndc.nasa.gov (ndmshub02-pub.ndc.nasa.gov [198.117.0.161]) by ndmsppt04.ndc.nasa.gov (8.14.5/8.14.5) with ESMTP id qBDDkONM018419; Thu, 13 Dec 2012 07:46:24 -0600
Received: from NDMSSCC05.ndc.nasa.gov ([198.117.2.174]) by ndmshub02.ndc.nasa.gov ([198.117.2.161]) with mapi; Thu, 13 Dec 2012 07:46:24 -0600
From: "Zoller, David A. (MSFC-EO60)[HOSC SERVICES CONTRACT]" <david.a.zoller@nasa.gov>
To: "ssireskin@gmail.com" <ssireskin@gmail.com>
Date: Thu, 13 Dec 2012 07:46:23 -0600
Thread-Topic: [dtn-users] A problem with dtntunnel
Thread-Index: Ac3ZE3Ox0Spq9bamQjqW9TXgtzao9QAI6Gyw
Message-ID: <04E3D99A62496240BCD6A576813E6E31E0BDECDCE2@NDMSSCC05.ndc.nasa.gov>
References: <CAJR8z9--cVk67ac-aJ2haKpc=7LSVWHFXhTykaGcdpLQeevtiQ@mail.gmail.com> <04E3D99A62496240BCD6A576813E6E31E0BDBBA4FA@NDMSSCC05.ndc.nasa.gov> <CAJR8z98cbUhEPMyzR4Syp+Cd1xcg3Ei3u-UjykCQCGo2rBe9QA@mail.gmail.com> <04E3D99A62496240BCD6A576813E6E31E0BDBBA685@NDMSSCC05.ndc.nasa.gov> <CAJR8z98JoR2BGaSLer+u9k=Ok6iFroO0puqkDG2RpCzf=AxXng@mail.gmail.com> <04E3D99A62496240BCD6A576813E6E31E0BDECD834@NDMSSCC05.ndc.nasa.gov> <CAJR8z982h=jJSrEVqMQSbpi_7+yP_XRu4P-BNUU7ZAntDrkyyA@mail.gmail.com> <04E3D99A62496240BCD6A576813E6E31E0BDECD996@NDMSSCC05.ndc.nasa.gov> <CAJR8z98s9EAuBw2aYr6EjEwFgMLLSKykrzdo_U0zrKRzhuHmFQ@mail.gmail.com> <04E3D99A62496240BCD6A576813E6E31E0BDECDC13@NDMSSCC05.ndc.nasa.gov> <CAJR8z99A3M5FU_Xn6RXja+B=mQYMZ64AHa2HajtcN-AFXTn-xQ@mail.gmail.com>
In-Reply-To: <CAJR8z99A3M5FU_Xn6RXja+B=mQYMZ64AHa2HajtcN-AFXTn-xQ@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
acceptlanguage: en-US
Content-Type: multipart/alternative; boundary="_000_04E3D99A62496240BCD6A576813E6E31E0BDECDCE2NDMSSCC05ndcn_"
MIME-Version: 1.0
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:5.9.8327, 1.0.431, 0.0.0000 definitions=2012-12-13_05:2012-12-13, 2012-12-13, 1970-01-01 signatures=0
Cc: "dtn-users@irtf.org" <dtn-users@irtf.org>
Subject: Re: [dtn-users] A problem with dtntunnel
X-BeenThere: dtn-users@irtf.org
X-Mailman-Version: 2.1.12
Precedence: list
List-Id: "The Delay-Tolerant Networking Research Group \(DTNRG\) - Users." <dtn-users.irtf.org>
List-Unsubscribe: <https://www.irtf.org/mailman/options/dtn-users>, <mailto:dtn-users-request@irtf.org?subject=unsubscribe>
List-Archive: <http://www.irtf.org/mail-archive/web/dtn-users>
List-Post: <mailto:dtn-users@irtf.org>
List-Help: <mailto:dtn-users-request@irtf.org?subject=help>
List-Subscribe: <https://www.irtf.org/mailman/listinfo/dtn-users>, <mailto:dtn-users-request@irtf.org?subject=subscribe>
X-List-Received-Date: Thu, 13 Dec 2012 13:46:41 -0000

Hi Sergey,
I see now the end-to-end effect you are after, and I'll experiment some more and get back to you, but it may be a few days.

I guess one question is:
If both ends are not up in "near real time", would you want the sender to stay connected until the receiving end comes online and can return the FIN, in all circumstances?
DZ

David Zoller
COLSA Corporation
HOSC / C107
*Office: (256) 544-1820
*EMail: david.a.zoller@nasa.gov

From: ssireskin@gmail.com [mailto:ssireskin@gmail.com]
Sent: Thursday, December 13, 2012 3:23 AM
To: Zoller, David A. (MSFC-EO60)[HOSC SERVICES CONTRACT]
Cc: dtn-users@irtf.org
Subject: Re: [dtn-users] A problem with dtntunnel

Hi David,

I'll start from the end. I have repeated the test with 100 parallel flows twice, and it passed flawlessly. I think that was my mistake: I probably counted the number of erased connections too early, before all 100 connections had terminated.

Now to the first part. I have replaced while(1) with while(!sock_eof) as you suggested. This leads to very strange results when sending a small file with nc. Note the final SYN and RST packets on the receiver side.

Sender: tcpdump -nn -npi lo port 9999 or port 19999
13:10:35.136283 IP 192.168.1.2.32880 > 192.168.1.2.19999: Flags [S], seq 2925076366, win 5840, options [mss 1460,sackOK,TS val 90705589 ecr 0,nop,wscale 7], length 0
13:10:35.136346 IP 192.168.4.1.9999 > 192.168.1.2.32880: Flags [S.], seq 2931196344, ack 2925076367, win 32768, options [mss 16396,sackOK,TS val 90705589 ecr 90705589,nop,wscale 7], length 0
13:10:35.136371 IP 192.168.1.2.32880 > 192.168.1.2.19999: Flags [.], ack 2931196345, win 46, options [nop,nop,TS val 90705589 ecr 90705589], length 0
13:10:35.136799 IP 192.168.1.2.32880 > 192.168.1.2.19999: Flags [P.], seq 0:1, ack 1, win 46, options [nop,nop,TS val 90705590 ecr 90705589], length 1
13:10:35.136819 IP 192.168.4.1.9999 > 192.168.1.2.32880: Flags [.], ack 2, win 256, options [nop,nop,TS val 90705590 ecr 90705590], length 0
13:10:35.136892 IP 192.168.1.2.32880 > 192.168.1.2.19999: Flags [F.], seq 1, ack 1, win 46, options [nop,nop,TS val 90705590 ecr 90705590], length 0
13:10:35.176755 IP 192.168.4.1.9999 > 192.168.1.2.32880: Flags [.], ack 3, win 256, options [nop,nop,TS val 90705630 ecr 90705590], length 0
13:10:35.178602 IP 192.168.4.1.9999 > 192.168.1.2.32880: Flags [F.], seq 1, ack 3, win 256, options [nop,nop,TS val 90705631 ecr 90705590], length 0
13:10:35.178623 IP 192.168.1.2.32880 > 192.168.1.2.19999: Flags [.], ack 2, win 46, options [nop,nop,TS val 90705631 ecr 90705631], length 0
13:10:35.253829 IP 192.168.1.2.32881 > 192.168.1.2.19999: Flags [S], seq 2934914533, win 5840, options [mss 1460,sackOK,TS val 90705707 ecr 0,nop,wscale 7], length 0
13:10:35.253860 IP 192.168.4.1.9999 > 192.168.1.2.32881: Flags [S.], seq 2932249125, ack 2934914534, win 32768, options [mss 16396,sackOK,TS val 90705707 ecr 90705707,nop,wscale 7], length 0
13:10:35.253878 IP 192.168.1.2.32881 > 192.168.1.2.19999: Flags [.], ack 2932249126, win 46, options [nop,nop,TS val 90705707 ecr 90705707], length 0
13:10:36.380942 IP 192.168.4.1.9999 > 192.168.1.2.32881: Flags [F.], seq 1, ack 1, win 256, options [nop,nop,TS val 90706834 ecr 90705707], length 0
13:10:36.381495 IP 192.168.1.2.32881 > 192.168.1.2.19999: Flags [.], ack 2, win 46, options [nop,nop,TS val 90706835 ecr 90706834], length 0
13:10:36.398070 IP 192.168.1.2.32881 > 192.168.1.2.19999: Flags [F.], seq 0, ack 2, win 46, options [nop,nop,TS val 90706851 ecr 90706834], length 0
13:10:36.398131 IP 192.168.4.1.9999 > 192.168.1.2.32881: Flags [.], ack 2, win 256, options [nop,nop,TS val 90706851 ecr 90706851], length 0

Receiver: tcpdump -nn -npi lo port 9999 or port 19999
13:10:35.185211 IP 192.168.4.1.44550 > 192.168.4.1.9999: Flags [S], seq 2927123945, win 32792, options [mss 16396,sackOK,TS val 90696412 ecr 0,nop,wscale 7], length 0
13:10:35.185266 IP 192.168.4.1.9999 > 192.168.4.1.44550: Flags [S.], seq 2929621602, ack 2927123946, win 32768, options [mss 16396,sackOK,TS val 90696412 ecr 90696412,nop,wscale 7], length 0
13:10:35.185286 IP 192.168.4.1.44550 > 192.168.4.1.9999: Flags [.], ack 1, win 257, options [nop,nop,TS val 90696412 ecr 90696412], length 0
13:10:35.206818 IP 192.168.4.1.44550 > 192.168.4.1.9999: Flags [P.], seq 1:2, ack 1, win 257, options [nop,nop,TS val 90696434 ecr 90696412], length 1
13:10:35.206861 IP 192.168.4.1.9999 > 192.168.4.1.44550: Flags [.], ack 2, win 256, options [nop,nop,TS val 90696434 ecr 90696434], length 0
13:10:35.222032 IP 192.168.4.1.44550 > 192.168.4.1.9999: Flags [F.], seq 2, ack 1, win 257, options [nop,nop,TS val 90696449 ecr 90696434], length 0
13:10:35.222133 IP 192.168.4.1.9999 > 192.168.4.1.44550: Flags [F.], seq 1, ack 3, win 256, options [nop,nop,TS val 90696449 ecr 90696449], length 0
13:10:35.222146 IP 192.168.4.1.44550 > 192.168.4.1.9999: Flags [.], ack 2, win 257, options [nop,nop,TS val 90696449 ecr 90696449], length 0
13:10:36.325396 IP 192.168.4.1.44551 > 192.168.4.1.9999: Flags [S], seq 2945716889, win 32792, options [mss 16396,sackOK,TS val 90697552 ecr 0,nop,wscale 7], length 0
13:10:36.325447 IP 192.168.4.1.9999 > 192.168.4.1.44551: Flags [R.], seq 0, ack 2945716890, win 0, length 0

Another strange thing happened when I ran the sending nc and forgot to run the receiving nc. Just try it yourself and watch tcpdump.

I have also conducted another test with the original dtntunnel code. I ran nc in interactive mode, i.e. nc <ip> <port> on the client side and nc -l <port> on the server side. Then I terminated the connection on the client side by pressing CTRL+D. The server received a FIN packet from the client, replied with its own FIN, and terminated. The client did not receive the FIN packet from the server and did not terminate. After that I established a new connection and terminated it by pressing CTRL+D on the server side. This time the server did send a FIN packet, the client got it, replied with its own FIN and terminated. The server did not receive the FIN from the client and continued running.

This makes me think that a dtntunnel that has received the final FIN from a TCP application doesn't transmit this FIN to its remote dtntunnel peer, or that the remote peer doesn't relay this final FIN to the TCP application on its side. I'm not a TCP and sockets guru, so I cannot figure out how to translate this into the language of sockets, which I think is important in order to understand how to fix dtntunnel.

I don't think that implementing an option that configures dtntunnel's behavior with respect to closing connections is a good idea. I believe there is a flaw in dtntunnel's logic, and it could be fixed so that dtntunnel determines by itself when to close a connection to a TCP application.
2012/12/13 Zoller, David A. (MSFC-EO60)[HOSC SERVICES CONTRACT] <david.a.zoller@nasa.gov>
Hi Sergey,
Research indicates that the TIME_WAIT state is a feature of TCP to prevent delayed packets from one connection from being delivered to a later connection between the same hosts. [http://tools.ietf.org/html/draft-faber-time-wait-avoidance-00]

Based on that, the added "if (sock_eof)" block can be removed and the while condition can be changed to "while (!sock_eof)" and you get the same result.

But, looking at the intent of the code and at oasys::IPSocket, it appears that it is possible to establish a socket connection and then close only the read side while keeping the write side open. I've never run across a socket used in that manner, but that is what it looks like to me. So the TCPTunnel::Connection::run while loop allows the read side to close and continues transmitting bundle payloads on the write side until it gets a bundle with the EOF bit set. If there are no bundles coming back the other way, as is the case here, then the while loop never exits, which keeps the netcat connection open.

So, based on that, I think a better solution would be to implement an option that configures dtntunnel to close the connection when the read side terminates (someone in the know can weigh in). Your solution is probably sufficient for your test.

--

As to connections not being removed from the table and sockets not being closed...

I set up a test to kick off 100 netcats on an 11MB file so that the connection would be open for several seconds:
#!/bin/bash
ctr=0
while [ $ctr -lt 100 ]; do
    nc -w 10 x.x.x.x 12345 < data11mg  &
    let ctr=ctr+1
done

I also added some debug code to print out connection-accepted and connection-closed counters as connections are accepted and closed.
After 30 connections, the first close kicks in, and then there is a mix of connects and closes until finished.
All of the connections were closed, and I did not have any sockets left open.

On my first attempt I exceeded my payload quota and started seeing "error sending bundle: 141" messages, so I don't think that is the issue you are running into.

You might try putting in similar debug code to see if you can determine what is happening.

Hope this helps,
DZ


David Zoller
COLSA Corporation
HOSC / C107
*Office: (256) 544-1820
*EMail: david.a.zoller@nasa.gov

From: ssireskin@gmail.com [mailto:ssireskin@gmail.com]
Sent: Wednesday, December 12, 2012 2:07 PM

To: Zoller, David A. (MSFC-EO60)[HOSC SERVICES CONTRACT]
Cc: dtn-users@irtf.org
Subject: Re: [dtn-users] A problem with dtntunnel

Hi David,

I meant exactly the same position, before the end of the while loop; I just described it incorrectly.

Please let me know if you find a solution to this problem.

Best regards,
Sergey
2012/12/12 Zoller, David A. (MSFC-EO60)[HOSC SERVICES CONTRACT] <david.a.zoller@nasa.gov>
Hi Sergey,
I was narrowing in on the same track... The FIN results in a returned length of zero on the read, and errno is also zero, so you have to key on the returned length, which is already in the code.

It looks like you added a couple of lines of debug code; the change should be inserted just before the end of the while(1) loop, at line 491 in the latest version on SourceForge.

I am seeing the sending socket stay in the TIME_WAIT state even if I add a call to sock_.shutdown(SHUT_RDWR) before closing the socket.
Getting close, I think.

The 0.0.0.0 was because I did not have anything at the other end of the tunnel and just stuck some easy numbers in there.
Thanks,
DZ

David Zoller
COLSA Corporation
Marshall Space Flight Center

From: ssireskin@gmail.com [mailto:ssireskin@gmail.com]
Sent: Wednesday, December 12, 2012 11:06 AM

To: Zoller, David A. (MSFC-EO60)[HOSC SERVICES CONTRACT]
Cc: dtn-users@irtf.org
Subject: Re: [dtn-users] A problem with dtntunnel

Hi David,

I added the following code at line 492 of TCPTunnel.cc, and it seems to have helped.

    if (sock_eof) {
        sock_.close();
        goto done;
    }

The senders, both nc and iperf, exit after they have finished sending the data. There are
no more sockets in the CLOSE_WAIT state. However, I am completely unsure whether
my code is correct and in the correct place.

Now I have encountered another problem. When the iperf test is run with 100 parallel flows,
some dtntunnel connections are not removed from dtntunnel's connection table, and the
related sockets are not closed. I suppose this happens because my code addition is far
from perfect.

And one more question: what does 0.0.0.0 mean in the -T tunnel specification? Is it just an
example host, or does 0.0.0.0 have some special meaning for dtntunnel?
2012/12/12 Zoller, David A. (MSFC-EO60)[HOSC SERVICES CONTRACT] <david.a.zoller@nasa.gov>
Hi Sergey,
I have duplicated what you are reporting and have to retract my earlier statement that this is the netcat design and not a problem :)
The sender netcat attempts to close the socket when it gets the [EOF], not in response to the receiver closing its end.


netcat sender:
cat /etc/passwd | nc x.x.x.x 12345

scenario 1 - netcat receiver with the keep-listener option:
                nc -k -l 12345

*         Sender terminates and the receiver stays alive, listening for a re-connect
*         FIN is sent in both directions, initiated by the sender

scenario 2 - dtntunnel receiver:
                dtntunnel -t -T 12345:0.0.0.0:54321 dtn://desteid/xxxx

*         Sender does not terminate
*         FIN is sent from sender to receiver but not the other way
*         Sender socket is in state FIN_WAIT2
*         Receiver socket is in state CLOSE_WAIT

Kill the sender and repeat...

*         Sender does not terminate
*         FIN is sent from sender to receiver but not the other way
*         New sender socket is in state FIN_WAIT2, and the first one has timed out and died
*         Both receiver sockets are in state CLOSE_WAIT

Kill the dtntunnel receiver...

*         2 FINs are sent from the receiver - 1 to each of the sockets (even though the other ends have expired)


I believe the issue is in the oasys IPSocket, or possibly the IPClient. I'll have a look at it unless there is a low-level socket expert out there who wants to give it a go...

Best regards,
DZ

David Zoller
COLSA Corporation
Marshall Space Flight Center

From: ssireskin@gmail.com [mailto:ssireskin@gmail.com]
Sent: Wednesday, December 12, 2012 4:38 AM

To: Zoller, David A. (MSFC-EO60)[HOSC SERVICES CONTRACT]
Subject: Re: [dtn-users] A problem with dtntunnel

Hi David,

I have investigated the issue a little. I found that no matter whether netcat is run with or without the -w1 option, something
strange happens. According to netstat -ntp on the sending node, dtntunnel remains in the CLOSE_WAIT state after nc finishes sending
data. I ran tcpdump on both sender and receiver, and it showed that the final FIN from the receiver doesn't reach the sender.
This looked like a bug in dtntunnel to me. I decided to re-check with iperf, and again tcpdump showed the same problem.

Here are the last lines of the tcpdump output. Remember that in my setup all outgoing TCP packets with destination port 9999 are redirected
with the help of iptables to the local dtntunnel process, listening on port 19999. That is why I run tcpdump on the loopback interface.

Receiver (192.168.4.1): tcpdump -nn -npi lo port 9999 or port 19999
15:05:02.496366 IP 192.168.4.1.34284 > 192.168.4.1.9999: Flags [P.], seq 123995:131097, ack 1, win 257, options [nop,nop,TS val 11163723 ecr 11163718], length 7102
15:05:02.496428 IP 192.168.4.1.9999 > 192.168.4.1.34284: Flags [.], ack 131097, win 1154, options [nop,nop,TS val 11163724 ecr 11163723], length 0
15:05:03.457421 IP 192.168.4.1.34284 > 192.168.4.1.9999: Flags [F.], seq 131097, ack 1, win 257, options [nop,nop,TS val 11164684 ecr 11163724], length 0
15:05:03.497482 IP 192.168.4.1.9999 > 192.168.4.1.34284: Flags [.], ack 131098, win 1154, options [nop,nop,TS val 11164725 ecr 11164684], length 0
15:05:04.457762 IP 192.168.4.1.9999 > 192.168.4.1.34284: Flags [F.], seq 1, ack 131098, win 1154, options [nop,nop,TS val 11165685 ecr 11164684], length 0
15:05:04.457856 IP 192.168.4.1.34284 > 192.168.4.1.9999: Flags [.], ack 2, win 257, options [nop,nop,TS val 11165685 ecr 11165685], length 0

Sender (192.168.2.1): tcpdump -nn -npi lo port 9999 or port 19999
15:05:02.433755 IP 192.168.4.1.9999 > 192.168.1.2.43360: Flags [.], ack 127449, win 386, options [nop,nop,TS val 11172887 ecr 11172875], length 0
15:05:02.433776 IP 192.168.1.2.43360 > 192.168.1.2.19999: Flags [.], seq 127448:130344, ack 1, win 46, options [nop,nop,TS val 11172887 ecr 11172887], length 2896
15:05:02.433783 IP 192.168.1.2.43360 > 192.168.1.2.19999: Flags [P.], seq 130344:131096, ack 1, win 46, options [nop,nop,TS val 11172887 ecr 11172887], length 752
15:05:02.444839 IP 192.168.4.1.9999 > 192.168.1.2.43360: Flags [.], ack 131097, win 386, options [nop,nop,TS val 11172898 ecr 11172887], length 0
15:05:03.421915 IP 192.168.1.2.43360 > 192.168.1.2.19999: Flags [F.], seq 131096, ack 1, win 46, options [nop,nop,TS val 11173875 ecr 11172898], length 0
15:05:03.461742 IP 192.168.4.1.9999 > 192.168.1.2.43360: Flags [.], ack 131098, win 386, options [nop,nop,TS val 11173915 ecr 11173875], length 0

I believe the reason the sending netcat without the -w1 option doesn't exit is that it doesn't receive the final FIN from dtntunnel.
Can this issue be fixed with a small amount of work?
2012/12/5 Zoller, David A. (MSFC-EO60)[HOSC SERVICES CONTRACT] <david.a.zoller@nasa.gov>
Hi Sergey,
I would not even call this a problem. netcat is designed to be extremely flexible, and by default the sender keeps its connection open as long as the other end does. On the other hand, the receiver by default terminates after receipt of an EOF, but there is a switch to keep listening if that is the desired behavior.

With DTN, your sending nc may do its thing while the destination node is not even online, and then an hour later the node becomes available and completes the transmission to the receiving nc. In this scenario, the sender could still be "hung up", possibly indefinitely, waiting for a terminating signal.

I am not the DTN2 authority, but I don't see a change to dtntunnel for this, as it would assume a specific usage that would probably break someone else's usage (like mine :)).

Best regards,
DZ

David Zoller
COLSA Corporation
Marshall Space Flight Center

From: ssireskin@gmail.com [mailto:ssireskin@gmail.com]
Sent: Wednesday, December 05, 2012 2:56 AM
To: Zoller, David A. (MSFC-EO60)[HOSC SERVICES CONTRACT]
Cc: dtn-users@irtf.org
Subject: Re: [dtn-users] A problem with dtntunnel

Hi David,

Thanks for your help; nc -w1 did the trick. But shouldn't dtntunnel's behavior be changed? The receiving dtntunnel could signal the sending dtntunnel that the listening nc has disconnected, so that the sending dtntunnel closes its connection to the sending nc. Or is this solely a problem of nc?
2012/12/5 Zoller, David A. (MSFC-EO60)[HOSC SERVICES CONTRACT] <david.a.zoller@nasa.gov>
Hi Sergey,
On RHEL 5.7, I am running dtn-2.9.0 plus modifications that should not impact the behavior of dtntunnel.
I just ran your test without the iptables redirect and I see the same behavior.
I believe that the sender exits when you go directly from nc to nc because the receiver exits when it gets an end of file.
You can add a "-w 1" option to the sender nc so that it will time out and exit after stdin has been idle for 1 second.
Best regards,
DZ

David Zoller
COLSA Corporation
HOSC / C107
*Office: (256) 544-1820
*EMail: david.a.zoller@nasa.gov

From: dtn-users-bounces@irtf.org [mailto:dtn-users-bounces@irtf.org] On Behalf Of ssireskin@gmail.com
Sent: Tuesday, December 04, 2012 11:13 AM
To: dtn-users@irtf.org
Subject: [dtn-users] A problem with dtntunnel

Hello all!

I am using dtn-2.9.0 on a RHEL 6 based Linux distro, and I am having a problem when using netcat (nc) with dtntunnel.
On the sender node I run "cat /etc/passwd | nc receiver_ip 9999". On the receiver node I run "nc -l 9999". With the help
of iptables, port 9999 gets redirected to port 19999, on which dtntunnel listens.

The file /etc/passwd is successfully delivered to the receiver and is shown on the screen. After this, the receiving nc exits.
No problem here. However, nc on the sender node doesn't exit after it sends the file; it continues to run forever. When I
run nc in the opposite direction, I get the same problem: the sending nc doesn't exit.

My configuration on both nodes is symmetric:
iptables -t nat -A OUTPUT -d $REMOTE_HOST -p tcp --dport 9999 -j DNAT --to $LOCAL_HOST:19999
dtntunnel -T $LOCAL_HOST:19999:$REMOTE_HOST:9999 $REMOTE_NODE/dtntunnel/nc -d
dtntunnel -L --local-eid $LOCAL_NODE/dtntunnel/nc -d

dtnping works ok in both directions.

Please give me any advice.

With best regards,
Sergey Sireskin



--
Kindest Regards

Sergey Sireskin
FGUP CNII EISU






--
Best regards,
Sergey Sireskin


