76775 Posts in 13501 Topics by 1651 Members - Latest Member: insider4ever April 27, 2024, 02:50:44 am
WinMX World :: Forum  |  WinMX World Community  |  Winmxworld.com Strategic Directions  |  My Thoughts and Ideas About WinMX and New Client
Author Topic: My Thoughts and Ideas About WinMX and New Client  (Read 8782 times)

0 Members and 1 Guest are viewing this topic.

Offline Plum

  • Core
  • *****
  • ***
  • I love WinMX!
My Thoughts and Ideas About WinMX and New Client
« on: April 11, 2012, 01:00:08 am »
I would like to share some thoughts, ideas, insights, and bugs.  Here are things I would like to see differently in the new client.

1.  Do not constantly hammer the settings.dat file, but only write to it if it has changed.  Hammering that file uselessly takes up system resources and can add to the wear and tear of newer hard drives.  Each cell in a SSD only has so many writes available in its lifetime.
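Point 1 could be as simple as a dirty check before each save. Here is a minimal C sketch; `Settings`, `save_if_changed`, and the fixed-size buffers are invented for illustration and are not from WinMX's actual code:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical settings store: keep a copy of what was last written
 * so a save can be skipped when nothing has changed. */
typedef struct {
    char data[256];        /* serialized settings */
    char last_saved[256];  /* what is currently on disk */
} Settings;

/* Returns 1 if a write happened, 0 if it was skipped. */
int save_if_changed(Settings *s, const char *path) {
    if (strcmp(s->data, s->last_saved) == 0)
        return 0;                        /* unchanged: spare the disk */
    FILE *f = fopen(path, "wb");
    if (!f) return 0;
    fwrite(s->data, 1, strlen(s->data), f);
    fclose(f);
    strcpy(s->last_saved, s->data);      /* remember what's on disk */
    return 1;
}
```

With a timer firing every few seconds, this turns a constant stream of writes into writes only when the settings actually change.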

2.  Flush files to disk, and the file system metadata along with them, periodically as they are being written.  WinMX has a flaw in that it doesn't regularly flush files to disk and only does so upon completion.  So let's say you download a very large file and, halfway through, the program crashes.  Guess what?  You are back where you were before the transfer started: if it was a partial, it reverts to its original size, and if the file didn't exist yet, it still won't.  In addition, there will be lost clusters in the file system, and it takes more bandwidth to download it all again.  So close and reopen the handles every so often, or whatever the code requires, to flush the files completely to disk.
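Point 2 might look like the following sketch: flush every fixed number of chunks so a crash loses at most one interval. `write_chunk` and `FLUSH_EVERY` are invented names, and the interval is purely illustrative:

```c
#include <stdio.h>

#define FLUSH_EVERY 64   /* illustrative interval, not a WinMX value */

/* Append one chunk of a download and periodically push the stdio
 * buffers to the OS so a crash can't wipe out the whole transfer. */
void write_chunk(FILE *f, const char *buf, size_t len, long *chunks) {
    fwrite(buf, 1, len, f);
    if (++*chunks % FLUSH_EVERY == 0) {
        fflush(f);   /* stdio buffers -> OS */
        /* On POSIX, fsync(fileno(f)) would additionally force the OS
         * cache to the physical disk; on Windows, FlushFileBuffers(). */
    }
}
```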

3.  Implement a final stage "filter" to make sure the results are what you entered.  This should be checked on both ends, both on the network end and on the client end.  Why?  Well, one flaw in WinMX is that when it sends out a file search request, it trusts that the other clients will only send what was requested.  In a perfect world with no rogue clients sitting out there, that model is fine.  But with the current attacks, it would be nice to have a final stage filter to double check that the results are what was requested.  Even if the protocols are updated to not have the vulnerability, this final check should still be added just in case a new disruptive client is invented.

In the recent attacks, the problem is not fake results.  What we have are modified hubs which are sending EVERYTHING on a search request.  I wouldn't be so quick to blame the big dogs, since why would they want to spam us with valid but illegal content?  "Oh, don't share copyrighted stuff, so we will forcibly send you copyrighted stuff to make you stop downloading it."  No, I don't think so.  The attacks only alerted me to new things to download that I hadn't thought about before.

4.  Write several flavors so users can download the best for their hardware.  In other words, support SSE and other extensions, multi-core, and 64-bit, while still being compatible for those with older equipment.

5.  Write crucial parts in assembly if possible.  I would start out writing it all in C, then rewrite crucial parts in assembly depending on the architecture/OS.

6.  Adding support for other protocols is good, but I would be cautious here.  The original Gnutella network (G1) is constantly flooded with spam.  RIAA/MPAA, pornographers, fraudsters, spammers, you name it.  So I would not include support for it.  We don't want to cause cross-network attacks.  What if ALL the valid WinMX files and ALL the fraudulent G1 files ended up on both networks?  We don't want that.  Adding G2 support is a little iffy.  I guess it is okay to include it as an option for advanced users, but default it to off.  As for the other stuff like OpenNap, ED2K, and DC++, sure, I don't see why they shouldn't be added.  Then someone really wanting certain files wouldn't have to run WinMX and Shareaza at the same time.

7.  Create some way to make OpenNAP stuff support swarming and for the WNP and OpenNAP transfers to be more compatible.

8.  Harden the protocol more from attacks.  Maybe make a newer protocol or even allow side by side access to both and the option to disable either one.

9.  Maybe multiple chat protocols. I am not a chat fan.  When I want to download, I want to download, and if I want to chat, I will use an instant messenger program for that.  But since so many complaints center around chats, allow for multiple protocols, where if one is compromised, another may work.  Maybe even cross-protocol chatting.

10.  Maybe add some testing to sniff out insecure or modified hubs, at least while in WNP mode.

11.  It would be nice not only to be able to block rogue sites and avoid rogue hubs, but to block hubs/peers who are connected to such machines.  If we make a new protocol, it should be able to pass on information about where a hub is connected and so on, and whether it is connected to blacklisted hosts, rogue versions, or hubs which act suspiciously.  In other words, a virtual quarantine.  If your neighbor connects to a bad host, then your client software should automatically dump them, or better yet, warn them or cause them to dump the bad host.  But I see a potential flaw with causing a client to dump peers: a malicious version of the software could force everyone out of your peer list.  If warning packets are used, then I would also give the option to override them in case the feature gets misused.

12.  There should be built-in detection and rejection of absolutely impossible addresses, ports, and connections.

13.  There should be encryption between the clients and the hubs, and between peers.  That may help with nosy ISPs.

14.  In addition to the final stage filter, why not a filter like that at each hub?  I mean, before forwarding results, make sure they are the results requested.

I am sure there are more points.

Offline Bluey_412

  • Forum Member
  • I'm Watching...
Re: My Thoughts and Ideas About WinMX and New Client
« Reply #1 on: April 11, 2012, 03:23:55 am »
With regard to Point 3 and Point 14, the only problem I see is that under normal circumstances results may often seem unrelated, with the text entered in the search not contained anywhere in a file name; obviously, the search is also indexing metadata tags, like MP3 tags, so the files are suitably listed.

Say, for example, I do a search for (Shudder) Eminem, and among the search results I see 'Stan.mp3', with no mention of Eminem in the file name.  The search is obviously reading either the full pathname (C:\music\eminem\albumname\stan.mp3) or the MP3 tags, which include the artist name, song name, album name, track number, etc.

A filter as suggested might not list such a file because the filename does not contain the search term 'Eminem'.
What you think is important is rarely urgent
But what you think is Urgent is rarely important

Just remember that...

Offline Plum

  • Core
  • *****
  • ***
  • I love WinMX!
Re: My Thoughts and Ideas About WinMX and New Client
« Reply #2 on: April 11, 2012, 03:31:53 am »
15.  Check for duplicates.  Highlight the results if their hashes are already in the library.
16.  Separate throttles for each aspect of the experience.  Not just uploads and downloads, but search bandwidth and other features.
17.  Maybe flood prevention, such as determining if a host is returning too many results too fast.
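Point 17 could start as a simple per-host rate meter. A sketch with an invented `FloodMeter` type; the 200-results-per-second threshold is made up for illustration:

```c
#include <time.h>

/* Per-host result counter over one-second windows. */
typedef struct {
    time_t window_start;
    int    count;
} FloodMeter;

/* Record new_results arriving at time `now`; returns 1 if the host
 * has exceeded the (illustrative) 200 results/second limit. */
int flood_check(FloodMeter *m, time_t now, int new_results) {
    if (now != m->window_start) {  /* a new one-second window */
        m->window_start = now;
        m->count = 0;
    }
    m->count += new_results;
    return m->count > 200;
}
```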

Offline Plum

  • Core
  • *****
  • ***
  • I love WinMX!
Re: My Thoughts and Ideas About WinMX and New Client
« Reply #3 on: April 11, 2012, 05:14:58 am »
Quote
With regard to Point 3 and Point 14, the only problem I see is that under normal circumstances results may often seem unrelated, with the text entered in the search not contained anywhere in a file name; obviously, the search is also indexing metadata tags, like MP3 tags, so the files are suitably listed.

Say, for example, I do a search for (Shudder) Eminem, and among the search results I see 'Stan.mp3', with no mention of Eminem in the file name.  The search is obviously reading either the full pathname (C:\music\eminem\albumname\stan.mp3) or the MP3 tags, which include the artist name, song name, album name, track number, etc.

A filter as suggested might not list such a file because the filename does not contain the search term 'Eminem'.

I was meaning items which matched none of the above.  During the flooding, all possible results were sent, not just hits with the term in the metatags or path.  It is practically impossible for every single file to be in every possible path name and have every possible metatag all at once (possible in principle, but too much work for the spammers).

Now let me say this: returning things from the metatags is a poor practice.  Why?  Let's say I have some hate speech I want everyone to download.  So what I do is take an MP3 editor and put all the most popular groups, titles, and keywords in the various ID3 tag fields.  Would you really want to include ID3 tag searches when anyone can edit the tags to force their files to come up regardless of what you search for?  In addition, as a heavy MP3 collector, I find that a lot of the internal tags of the files out there are incorrect.  So are the filenames, for that matter.  Like, how much "Weird Al" stuff is really by Weird Al?  Likewise, if someone is stupid enough to put stuff related to one thing in a path for something else, would you want the results?  Depends, I guess.  In your example, stan.mp3 might have nothing to do with Eminem, and someone put the file in that path by mistake.  What if I were to install Windows in the Eminem folder and include my system directories in the library?  Would you want that?  Have fun with a **** load of .DLLs and .EXEs! ;-)

Now you bring up a good point.  My method is still valid, but it should be adapted.

a.  The protocol could be modified to separate the various types of results.  Like separate name hits, path hits, and metadata hits.  Or just fully qualified name hits and metadata hits.  Options could be added as to what types of hits are allowed.  If one is compromised, it can be disabled (compromised name hits can be easily verified on the client machine).  Then results marked as name hits could be checked, since name hits seem to be the most abused.  WNP already returns path names (though it doesn't display them by default), so it is easy to see if a result has the term in the path or name.

It would be impossible to check the validity of a reported metadata hit from your own machine without downloading a sampling of each file, which is impractical.  I guess we could give the option of doing a peer-to-peer download (not taking the hub's word) of just the metatag portion and verifying it that way.  That would happen only after displaying the file and only on demand, not as an automated process (too intensive as a routine, but less intensive than downloading crap).  I am not even sure whether WinMX even uses metadata in search results beyond the sampling parameters and length in the exclusion/inclusion filters.  But it might, and if it does, we should duplicate it, maybe with a switch for advanced users to disable it.

b.  The hubs could be better prepared than the end clients for determining the validity of the non-name hits.  But if you are connected to a rogue hub, then it cannot be trusted.

c.  The final stage check should be just an option.  While it could block metadata searches, it could also make the network more usable in a time of flooding.  Some may like that protection all the time, while some would rather have more results.  But more results are only good to a point.  We are getting so many right now that whether or not metadata searches get blocked is the least of your worries if you cannot find anything at all.  Also, the final stage check should be optional for each supported protocol, so you can enable it just for the more abused ones.

d.  In connection with item 10 (original post), a new client or hub could send out searches that cannot possibly exist.  If there are hits, the machine sending those could be blacklisted, or at least session-ignored.  If they send things that could never exist, that means the machine is dynamically creating spam from whatever name is entered.  That is mostly a G1 problem, not a WNP problem.  But spammers could eventually duplicate attack styles across networks.

e.  Similar to d, pattern matching blocking should be an option.  Ever get results like these?

Britney Spears Activation Patch.exe.mp3  (Huh?  Britney Spears is not a Windows version.)
Britney Spears Crack.exe.mp3 (What is there to crack in an MP3?)
Gospel 10-year-old having sex with animal.avi  (Gospel is the search term.)
Adventures in Odyssey (XXX Amateur Sex Videos Getter).torrent
Adventures in Odyssey (Uncut Radio Edit).mov  (Huh?  Uncut, and radio edit, and video, and Christian themed?  Spam!)
Lady Gaga [valid til nov 2011].rar

Anyway, these are fake files using name templates.  The files might be real (viruses, ads, porn teasers), but were named on the fly based on what was searched.

f.  I would include exact phrase matching as an option too, and check that against the results.  Otherwise, type in "Coward of the County," and get "Of the County Coward [live version].mp3" and it be the same length as the other bogus results.  Again, that is more of a Gnutella problem.
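The template-named fakes in point e often give themselves away mechanically, for instance by a double extension. A crude sketch; the list of suspicious endings is invented for illustration, not a real blocklist:

```c
#include <string.h>

/* Flag obvious name-template spam such as double extensions
 * ("....exe.mp3"), which real media files essentially never have. */
int looks_like_template_spam(const char *name) {
    static const char *bad[] = { ".exe.mp3", ".exe.avi", ".scr.mp3", NULL };
    for (int i = 0; bad[i]; i++)
        if (strstr(name, bad[i])) return 1;
    return 0;
}
```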

Offline Bluey_412

  • Forum Member
  • I'm Watching...
Re: My Thoughts and Ideas About WinMX and New Client
« Reply #4 on: April 11, 2012, 05:28:41 am »
Other discussions in the past have come to the conclusion that fake search results are happening because when you search for 'XYZ criteria', a rogue device on the network (likely a compromised Primary) converts all search requests to C:\*.mp3 or similar, so that all the results are actually the contents of all share libraries stored on all users' C: drives.  If files are on any drive EXCEPT C:, they do not show up in the garbage listing

heres a little experiment i tried:

Disconnect from the WPN, then connect, as Primary, using the controls on the Networks page of WinMX

Have a suitable search name loaded in the search section, and as soon as P=1, hit search and watch the results. When P=2, hit Search again, and watch...

Keep doing this for each subsequent extra Primary that connects, all searches will seem normal until, mebbe at P=6, or 7 (or 2) wham, here comes all the garbage

There's your compromised Primary! (It could also be a secondary)
What you think is important is rarely urgent
But what you think is Urgent is rarely important

Just remember that...

Offline Plum

  • Core
  • *****
  • ***
  • I love WinMX!
Re: My Thoughts and Ideas About WinMX and New Client
« Reply #5 on: April 11, 2012, 06:37:45 am »
That was roughly what I was saying, except for the C: drive part, but more concise.  I think it is a primary (hub or superpeer in Gnutella terms).  It needs to be a machine that is well connected, and probably a cluster.  I've had extraneous results starting at P=1.  While everyone doubts the caches are involved, it makes you wonder when the first couple of peers trigger this.  However, I disagree with your assessment that the machine that connects at a specific point is the poisoned one.  I have a feeling it could be up to 5 degrees of separation.  That machine could just be a messenger connected to another machine, hence the idea about "warning packets."

We could probably make a good protocol implementation which would dynamically reroute itself when under such attacks.  Just teach it to know what an attack "feels" or "smells" like.  For instance, use my double filtering idea (against the pathname since the network sends that) as a weighted percentage and using that as a criteria for dumping a node.  While some hits could be only in the metadata, all would not be, so if none of the qualified pathnames (path+filenames) returned have the search terms in it, that would be very suspicious.  Sure, that may be possible if you only have a handful of files named Track01, Track02 etc., but not if you have hundreds.
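The weighted-percentage criterion above might reduce to something like this sketch: dump a node when too few of its returned pathnames contain the search terms. The 10% threshold and the 20-result minimum sample are invented values:

```c
/* Decide whether to dump a node based on what fraction of its
 * results actually contain the search term in the pathname. */
int should_dump_node(int total_results, int results_with_term) {
    if (total_results < 20)
        return 0;  /* too small a sample to judge fairly */
    /* dump if fewer than 10% of results contain the term */
    return results_with_term * 100 < total_results * 10;
}
```

The minimum-sample guard is there precisely for the Track01/Track02 case: a handful of oddly named files should not get a node dumped.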

On the earlier discussion, not all files would have metadata, and thus could safely be searched by name (including path) alone.  Compressed media formats would (.mp3, .wma, .ogg), but raw ones usually don't (.snd, .wav, .voc) other than sampling parameters.  The compressed files don't have anything easily readable, but they can be indexed and specific files extracted and examined if they exist (.diz, .sdi, read.me).  Only some .EXEs have a metadata resource section, though you can find a lot if you know what to look for (I once tried writing code to determine what language the executable was written in - different compiler libraries have their own signatures).  Image files vary.  GIFs, PCX, and bitmaps would not (other than size and dimensions), while JPGs, TIFFS, and maybe .PNG often do.

You did give another idea on how to partially mitigate this.  What if we could get all the users to move their files to another drive or partition, or use drive management and change letters?  The trick would be getting everyone to do that without alerting the attackers.  Obviously, using only OpenNap gets rid of a lot of crap, but some things may require WNP, so I enable it when I have to and then disable it.  PeerBlock seems to have little or no effect on the flooding, though it might put a slight dent in the different types of attacks in the other protocols.

I found that in Shareaza, there are two effective workarounds for the fake results (those truly are fake and created as needed - and the disturbers have had more experience with the G1 protocol).  One is my idea from the first post.  Simply use the additional filter box at the bottom.  For example, I search for half of the term and filter on the other half, which the spammers cannot know in advance.  Only what I want would fulfill both the public and private requirements.  There are many variations of the final stage filtering idea.  Another is to search for what you want minus a letter, then filter with the letter.  It would be nice if that could be combined into a single step as far as the user is concerned.  The other effective workaround is to disable the G1 network and just use the rest.  The modern Gnutella clients can also use G2, and there are the other protocols, so little significant loss, theoretically speaking.

Offline GhostShip

  • Ret. WinMX Special Forces
  • WMW Team
  • *****
Re: My Thoughts and Ideas About WinMX and New Client
« Reply #6 on: April 11, 2012, 08:23:40 am »
What network do you hail from Plum ?

I ask this as many of the excellent ideas you have put forward are already implemented in WinMX, and some of the suggested improvements will already be in the new client.  There are labour-intensive ways to track the rogue primaries, but a protocol update does seem the way ahead and was the preferred route.  WinMX already features encryption in all areas bar the client-to-client file transfers, although that may now need modifying due to being widely published.  Files can be matched using a lightweight hash and searched for by metadata criteria in WinMX, and the new client already features a matching routine that checks what was requested against what was delivered.

I have to roll to work so I'll cut this post short, but I'm very pleased you seem to be thinking of fixes rather than problems, a most refreshing change and a welcome one  8)

Keep up the positive and helpful dialogue  :-D


Offline Plum

  • Core
  • *****
  • ***
  • I love WinMX!
Re: My Thoughts and Ideas About WinMX and New Client
« Reply #7 on: April 11, 2012, 08:32:10 pm »
Quote
What network do you hail from Plum ?

I ask this as many of the excellent ideas you have put forward are already implemented in WinMX, and some of the suggested improvements will already be in the new client.  There are labour-intensive ways to track the rogue primaries, but a protocol update does seem the way ahead and was the preferred route.  WinMX already features encryption in all areas bar the client-to-client file transfers, although that may now need modifying due to being widely published.  Files can be matched using a lightweight hash and searched for by metadata criteria in WinMX, and the new client already features a matching routine that checks what was requested against what was delivered.

I have to roll to work so I'll cut this post short, but I'm very pleased you seem to be thinking of fixes rather than problems, a most refreshing change and a welcome one  8)

Keep up the positive and helpful dialogue  :-D

Thank you!  Actually, I mainly use WinMX, but I've used the others which are mostly Gnutella derivatives.  I've used Gnucleus, Shareaza, Frostwire, DC, DC++, and the older ones back when they were working and allowed (like Kazaa).  I even used the original Shareaza which I liked better, since it used the Kazaa network, but without the limits.  Since I have used most of them, I tend to have picked up the Gnutella lingo.  One of the minds behind Gnucleus invented the G2 protocol.  He wanted something scaleable and modular, where different clients could use it and they could offer different features and still be compatible.

Most of what I shared is not in WinMX now.  I know it almost like the back of my hand, as far as the user experience goes.  The first 2 items are WinMX bugs, and a simple DLL patch cannot fix either.  Check the date and time on the settings.dat and the disk accesses.  It is being written to every few seconds no matter what.  If you have WinMX running, the time on the file will always be the current time.  The other bug I learned about the hard way.  I've lost files and developed disk errors just by WinMX crashing.  So the new client needs to flush them regularly.

I know that WinMX lacks 64-bit code and multiprocessor/multicore/SMP/hyperthreading support.  On a Core i7 3930K, WinMX only uses one of the 12 available threads.  If possible, you should give the search capability its own thread.  Sometimes it pegs the processor (the particular core used), and so does deleting a lot of files or clearing a lot of entries.  So keep those high-powered operations out of the same threads as the GUI and file transfers.  That way, a stall will resolve faster, and even if the search or file management threads peg the CPU, the user will not perceive a loss of functionality or think the program is locked up.  I don't know if SSE or similar are used.

Most of the rest are not in the current client as I mean them, except the compression.  It probably needs to be changed because of the leaks.

I know for a fact that my "final stage filter" is not in WinMX now as I meant it, or the current flaw would not be as handicapping for those searching for files.  The experience may be slower than usual when under attack, but usable.

As for updating the protocol: sure, let's do that.  But I still think the other ideas should be added too.  Even if the new protocol seems immune to certain challenges, those challenges should still be taken into account.  So I'd still use the end stage filter I proposed and give configuration options.  Separate packets for pathname hits and metadata hits were my idea of a workaround, so only the pathname results would be checked against the original search.  Yes, that weakens the protection, but that can be mitigated by adding the ability to block metadata results in case they get abused.

The idea I suggested to the Gnutella team was to send the query minus a letter to the network and have a local filter check the entire search against the results, snubbing the superpeers/hubs involved in persistently sending results that match only the truncated query.  Not necessarily a ban, in case of false fingering, but ignored for the session or at least a half hour.  If you want to get more sophisticated, you can keep a database containing all the times machines were bumped for sending irrelevant hits.  Then the ones bumped the most could be sent to a sort of global blacklist.  But I see a possible flaw in that.  Rogues can reverse the process and cause the network to bump all the good clients.  You have to be careful that any security measure cannot be used against us.  That is the problem with using a reputation system, since rogues can poison the reputations of the innocent.  That is the problem with democracy in general, as good a concept as it is.  If you give everyone a voice, you are also giving bad people/nodes a voice, and who is to determine who is bad?  You sometimes have to think of networks and protocols in terms of political science.
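The send-the-query-minus-a-letter trick splits the search into a public part (sent to the network) and a withheld part (checked only locally against the results). A sketch with an invented helper name:

```c
#include <string.h>

/* Copy the query minus its last letter into public_part; the full
 * query stays private for local filtering of the results. Spammers
 * generating names from the public query will fail the local check. */
void split_query(const char *full, char *public_part, size_t cap) {
    if (cap == 0) return;
    size_t n = strlen(full);
    size_t keep = (n > 0) ? n - 1 : 0;   /* withhold the last letter */
    if (keep >= cap) keep = cap - 1;
    memcpy(public_part, full, keep);
    public_part[keep] = '\0';
}
```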

Likewise, adding simple rogue detection to the end client is still good.  It is simple to send out an impossible search to each machine that connects (maybe keep a small local cache for recent future dealings with the same machines).  But I see a problem...  What if the disturbers discover which secret test phrase is used?  They will simply detect and ignore it, and then spam the future searches.  If we make this open source (bad idea), then everyone would know it.  So the option to session-ban machines which return hits on the "impossible" search would have to be user configurable.  That way there would be too much diversity for the other side to predict and code around.  But it probably should be off by default (no need to send out more junk traffic than necessary).

I also think we need private forums which cannot be searched.  The inner workings need to be kept to a trusted circle, and their true identities kept as anonymous as possible.  We should avoid the limelight to avoid being targets, and to buy more time of enjoyment by obscurity.

Offline GhostShip

  • Ret. WinMX Special Forces
  • WMW Team
  • *****
Re: My Thoughts and Ideas About WinMX and New Client
« Reply #8 on: April 11, 2012, 09:52:28 pm »
That all seems sensible logic to me Plum, where have you been hiding out  :lol:

Tbh the security side of things takes care of itself in the main.  Ideas regarding network security mechanisms are always weighed against possible abusive "friendly fire" usage, and I agree many of the client-side anti-abuse mechanisms need to be robust in case the fancy network-side stuff fails, as has occurred currently.

The multiple threading of the client is being worked on, but the key priority is actually creating the working code for each of the features, leaving optimisation and clean-ups to be pencilled in after that, as we have no way of predicting what's a memory hog in the final code build till we have it completed.  But it has been on our minds of course, and some portions are threaded already.

Most of the folks I have discussed the matter with have not looked favourably on any of the reputation systems, as they all seem wide open to abuse, be it from plain users or malicious developers, and we face both here.  Thus, as a network-protocol-based mechanism, I can't see such systems being successful.

We do take security seriously here, and the developers of the new client have their own private areas off site that are known only to those working on the project.  This has ensured security has been absolute.  However, as you know, a decent reverse engineer can discover much, so any new client protections must be implemented in all new clients and designed to impede abusers by enforcing strict implementation of network controls.  But as you say, the less said the better  :yes:

Offline Plum

  • Core
  • *****
  • ***
  • I love WinMX!
Re: My Thoughts and Ideas About WinMX and New Client
« Reply #9 on: April 12, 2012, 02:19:37 am »
GS, thank you for your reply.  I tend to agree with all of that.  Are there places where I can safely bring up protocol, client, and interface ideas?  I've been around for years but never signed up.  I sort of had a less than perfect experience in a Gnutella group.  I didn't feel heard.  They shot down my search+filtering idea, either because they misunderstood it, because they thought it would take up more bandwidth, or because they thought it would simply give the attackers a reason to come up with worse attacks.  Well, I don't buy that last excuse, because if that were the case, then why lock your doors and windows?  Burglars will just break them anyway, right?  The truth is that locks really only keep the honest people honest and make it harder for novice burglars.  If you are particularly a target, like if you are known to have lots of expensive things or money lying around, they will get in.  Ironically, the more security you have past a certain point, the more determined the burglars will be, since they then know you have things to protect.  So knowing "they will get in anyway" is no reason not to lock your doors.

I digressed, but the Gnutella team had a defeatist attitude, and I limited my involvement in the behind-the-scenes stuff partly because of that.  I had a way to reduce the perception of some of the most common spam types (the spammers were free to keep on and would notice nothing different on their end, but the end clients would filter the most common types), but they weren't ready for it.  It is one of those things that is a client-side enhancement (it could also work at the hub/superpeer/primary level before forwarding requests, I assume) that is fairly network and protocol independent.  It would be a higher-level enhancement and can supplement the lower-level stuff, like the search functionality itself.  I think they should somehow be paired.  The clients need to stop being naive and trusting that what they ask for is what they will get.  You don't walk certain streets without some sort of protection, and the same goes for the digital world.

I agree about most of the reputation schemes discussed being open to abuse.  That is a discussion that comes up across the different protocols and networks on occasion.  The Gnutella team came to the same conclusions.  If you use reputation systems to control who gets the most bandwidth or file access, then end users have incentives to manipulate them and apply hacks and patches to their own files out of greed.  In fact, that has already been done to get around ratio blocks in various clients.  All they do is show files and slots, but refuse access to the files.  So they can pretend to be a sharer without sharing anything.  That is probably part of the reason why some DC nodes refused DC++ connections: DC++ is open source, which lends itself to compromised builds.  If someone wants to take without giving, they could modify the software to report files without making them available.  There might even be a WinMX add-on like that, and I think I remember seeing weird client behavior along that line, though most of the time it could be someone with a misconfigured firewall/router.

Then of course, reputation schemes can be used by the abusers (corporate types, spammers, and disgruntled types) to either make themselves look better, or the most prolific of the legitimate sharers look like spammers and non-sharers.  All they have to do is set up a handful of modified Primaries to constantly vote for each other and against unrelated machines not on blacklists.  That would poison the feature and make it do the opposite of what it was intended to do.

Offline White Stripes

  • Core
  • *****
  • ***
  • Je suis aimé
Re: My Thoughts and Ideas About WinMX and New Client
« Reply #10 on: April 12, 2012, 03:01:36 am »
Quote
Create some way to make OpenNAP stuff support swarming and for the WNP and OpenNAP transfers to be more compatible.

this has probably been answered but opennap can 'swarm' ... with itself that is... the lopster opennap client already does this.... mixing opennap xfers with wpn xfers would be a bad idea tho since those are two different networks with two different hashing systems and it would be unfair to the users of other opennap clients...

Offline Plum

  • Core
  • *****
  • ***
  • I love WinMX!
Re: My Thoughts and Ideas About WinMX and New Client
« Reply #11 on: April 12, 2012, 05:22:51 am »

Quote
this has probably been answered but opennap can 'swarm' ... with itself that is... the lopster opennap client already does this.... mixing opennap xfers with wpn xfers would be a bad idea tho since those are two different networks with two different hashing systems and it would be unfair to the users of other opennap clients...

How would it be unfair for Open Nap clients if someone is connected to both networks and getting some of the files filled from WNP?

I haven't figured out how to get swarming to work within OpenNap.  I mean, if you select two sources, the second will give you an overwrite prompt or whatever.

Offline White Stripes

  • Core
  • *****
  • ***
  • Je suis aimé
Re: My Thoughts and Ideas About WinMX and New Client
« Reply #12 on: April 12, 2012, 03:43:44 pm »
mx cant swarm on nap so the overwrite prompt is all you will ever get...

the unfair part comes from the uploads... the nap client user only has one route whereas the mx client user would have two...

Offline steed_and_emma

  • Forum Member
Re: My Thoughts and Ideas About WinMX and New Client
« Reply #13 on: May 04, 2012, 02:56:10 pm »
Any idea of how far away we are from the release of the fixed WinMX?

Steed

Re: My Thoughts and Ideas About WinMX and New Client
« Reply #14 on: May 05, 2012, 01:15:13 am »
Nothing quantitative at the moment.
My understanding is that the coders are working through the primary/secondary links at the moment.
A partly finished pre-alpha has been released to a small group of testers.
The last one I saw was able to search, chat, get a room list, and connect as primary to the WPN.
It couldn't connect as secondary or accept incoming secondary connections.

I hope this gives you some slight clarification as to where it is up to.

Offline Will

  • WMW Team
  • *****
  • *****
  • ***
  • It wasn't me
Re: My Thoughts and Ideas About WinMX and New Client
« Reply #15 on: May 05, 2012, 01:51:33 am »
It can accept incoming secondary connections but it's limited so it was disabled for that build :yes:

Offline wonderer

  • MX Hosts
  • *****
  • ***
Re: My Thoughts and Ideas About WinMX and New Client
« Reply #16 on: May 05, 2012, 09:30:39 pm »
how many drops of water are in the ocean?

Offline achilles

  • Core
  • *****
Re: My Thoughts and Ideas About WinMX and New Client
« Reply #17 on: May 05, 2012, 10:45:55 pm »
Even more than the national debt of the United States.
I'm a Hardware, and Cyber Security Guy.

Offline White Stripes

  • Core
  • *****
  • ***
  • Je suis aimé
Re: My Thoughts and Ideas About WinMX and New Client
« Reply #18 on: May 05, 2012, 11:18:29 pm »
...considering the variances in temperature and salinity (and the effect those have on surface tension) you would have to first define 'drop' ....

for those curious about the new client... yes it works... sorta... secondary connects and the xfer of files are still missing... and as always, murphy is riding shotgun....

Offline wonderer

  • MX Hosts
  • *****
  • ***
Re: My Thoughts and Ideas About WinMX and New Client
« Reply #19 on: May 06, 2012, 08:35:58 pm »
oops
locked myself
ignore this post

©2005-2024 WinMXWorld.com. All Rights Reserved.
SMF 2.0.19 | SMF © 2021, Simple Machines | Terms and Policies