WinMX World :: Forum
WinMX World Community => Winmxworld.com Strategic Directions => Topic started by: Plum on December 30, 2013, 04:41:42 am
-
I think there needs to be a topic just for new client bugs. I just tried it, here is what I found:
1. In the settings, a number of the check boxes do not work.
2. In the settings, there are some screen drawing problems, screen corruption.
3. There is no place to specify incomplete files.
4. Incomplete files placed in the download directory will not load.
5. The search filters don't work. Extension and bit rate filters do nothing.
6. Inability to stop the search.
7. Inability to select multiple files.
8. Inability to disable the download confirmation.
9. The downloads don't actually start.
A feature I'd like to see is search preset profiles, like in WinMX.
The "filter" I recommended would be nice. It would be good to have code to make sure that what you search for is all you get, at both the protocol level and the user-interface level. One idea I had for the UI level was to run the actual search across the network one letter short and then return only what matches what the user actually searched for. For instance, a user puts in Britney. The UI passes "Britne" to the client/server search code, then matches the returned results against "Britney" and shows only those. So if there are fake-file generator peers, they return the short search string in their results, distinguishing them from the results the user wants. The network would return "Best hits of Britne 2014" and "Britney Spears - One more time," and since "Britney" was the original search, the one with "Britne" would be filtered out as not matching. (And give the option to disable this refinement.) To refine this further, you could add the option of a literal, exact search (again done one letter short), where anything with more words than the exact search is not passed to the UI.
-
There is a topic for bugs on the ourmxworld forum; check there.
http://www.ourmxworld.net/forums/bug-reports
-
Incomplete-files searching should be done in the lower box, and there is code there to check for such entries. Currently the files are downloaded in a different format than the WinMX format; however, the "Incompleted" format we all know is to be added soon. The files still arrive, just not in the same style.
https://www.winmxworld.com/tutorials/winmx_incompleted_files_information.html
The bitrate and file-type filters should work; I will get those checked out.
The search stop should work fine, but once again I shall check this out.
The download and upload confirmation messages have been removed in beta 2, which has not yet been released.
In the main, the network attacker does not care what you search for. He (and yes, I'm sure it's a he) simply sends as many results as he can, based on a known flaw in the WinMX results-handling component. OurMx does filter out by keyword, so I would be interested in seeing some screenshots of what you're seeing.
I haven't updated this much yet, but here's the list of bugs fixed so far that I have had time to document.
https://www.winmxworld.com/tutorials/ourmx_client_updates.html
The elves are still working, and we are also looking to add OpenNap at some stage, once we have gathered enough data to do so.
-
I think it would be a good idea to keep the bug reports here, as there seems to be very little interest at ourmxworld. I think users check whether winmx is working or not and have little interest in ourmx, as it is so poor functionally. This is reflected in the queue I had over the last few days when winmx was not being attacked, and it proves there are still users interested.
As I have said many times, progress with the new client is painfully slow, and I know some friends who have simply given up, so it needs to move a lot faster and progress needs to be seen :)
-
I would like to add that there needs to be some communication on the old winmx where it says "checking for updates." There is not even any information about ourmx there, which I find staggering :no:
-
The decision to announce the existence of a new client across the network will only be made when it is able to support the whole network; otherwise small but critical bugs could be launched across the network and cause a lot of carnage, and that's something we can't afford. But I too am pleased to see so many folks still holding firm and spending quality time with their friends despite serious efforts to prevent such freedom of speech.
Much as I too want things to speed up, it's all down to how many folks put into the pot. It's unrealistic to expect a few folks to do all the work continuously and then ask why it's taking so long. Lots of folks could be helping in different ways, but there's little appetite within the community to lead by example, so we all have to wait ages to see anything good come to fruition. This is the reality of the situation, and whilst what's been said here is not ideal, it's accurate and honest.
-
The bug reports for ourmx are in the bug report forum on ourmxworld.
People have done a great job there of identifying and providing information on bugs they have found.
Tutorials and other information relating to ourmx will be available as they are completed.
-
Good to hear Silicon 8)
Like myself, Silicon has been looking at the bigger picture in all of this and at the need to support the new client once it is able to walk for itself. It is a pretty large project already, so hosting it here was not seen as a helpful long-term objective, and maybe even a hindrance to creating further projects here in the future. With this in mind, OurMxWorld will be the key area for all developments on the new client, and WinMxWorld will continue to keep a watching brief on it and other WPN-related projects that appear in the future.
There is obviously a transitional period for all of this to happen smoothly, but that's the long-term strategy discussed between myself and Silicon some time ago, and a route I agreed was of benefit in the long term.
I hope you all see the benefits in having a place that's focused on supporting the new client and its users, and the ability therein to create a core group of OurMx-specific users/developers to support the project way into the future, when some of us older users have... well, gotten old.. lol
Happy new year to you all in advance :-D :-D
-
I am not a newbie, and thus never ask for help nor ever ask on my own behalf. I exist to share and to teach others. I never mentioned searching for incomplete files; "specifying" was short for "specifying the paths." There is no setting to say where to load or save incomplete files. I would assume the download location if nowhere else, but it refuses to load the ones I already have from there. But thank you for telling me the format won't be compatible.
You still don't get the idea for MY type of filter, which works against most abuse and which I've been teaching and lecturing about for years. Each time I bring it up, you fail to realize I AM TEACHING YOU - NOT THE OTHER WAY AROUND. Each time I bring it up, you keep forgetting what I said about it before. It is a revolutionary method, but everyone is too proud or stupid to use it. I think I will apply for a patent, since nobody in file-sharing will take me seriously.
What you said about the attackers not caring was my whole point. I just gave instructions on how to build a filter that finds the gibberish bots by turning their own string-echoing against them. Again, for the dense, here is what my proposed filter does:
Imagine the code divided into two sections: the UI (what the user sees) and the network portion (the invisible part that interfaces with the network). It is possible to refine results in both places. Without my filter, the user's search is passed directly to the network code. Let's say without the filter you type in Britney; you would usually get stuff like:
Britney amateur porno.mpg
Britney best hits 2014.mp3
Britney Spears - Hit me baby one more time.mp3
Britney crack.exe
Britney no DVD patch.zip
Britney Spears - Oops I did it again.mp3
Now if you searched for Britne, you get:
Britne amateur porno.mpg
Britne best hits 2014.mp3
Britney Spears - Hit me baby one more time.mp3
Britne crack.exe
Britne no DVD patch.zip
Britney Spears - Oops I did it again.mp3
Notice the difference? Since Britne is part of Britney, the real Britney Spears stuff is found too, but the fakes come back as Britne, not Britney. So you see? We tricked the fake-file bots into telling on themselves. Now, what if we automated that process? My filter would do the search ONE LETTER SHORT of what the user asks for. Here is how to implement it: design the UI to remember the real search but search one character short, and send that to the network code. The network code relays the incoming data to the UI code, where the UI code compares it with the original search before displaying. Sure, add the ability to turn this off. No, it should never be relied on as the main fix, but added as preemptive protection, EVEN IF there is a reliable fix at the protocol level. I would add this extra layer of filtering regardless. So that is the beauty of it..... the fake files will be created with the shortened name, while the genuine ones will be returned with their authentic names regardless (since a shorter search returns more results).
And my suggested refinement would work even with the network as it was. Both the UI and the protocol would do the refinement, so even if the hacker blasts through the protocol with garbage hits, robust UI code could still restrict things to what was actually searched for by discarding the extra hits. There should also be an option for an exact search, but applied only on a per-search basis, not as a default.
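The two-layer idea described above can be sketched in a few lines. (Python used purely for illustration; `filtered_search`, `network_search`, and `fake_network` are made-up names, not anything from OurMx or WinMX.)

```python
def filtered_search(query, network_search):
    """Send a one-character-shortened query to the network, then keep only
    results containing the user's full original query (case-insensitive)."""
    if len(query) < 2:
        return network_search(query)        # too short to shorten safely
    shortened = query[:-1]                  # "Britney" -> "Britne"
    raw = network_search(shortened)         # this is all the network ever sees
    wanted = query.lower()
    # UI-side refinement: fake-file bots echo the shortened string, so any
    # hit lacking the full query is discarded as a likely fake.
    return [r for r in raw if wanted in r.lower()]

# Simulated network: a fake-file generator echoes whatever string it gets,
# while genuine shares still match because the short query is a substring.
def fake_network(q):
    return [q + " best hits 2014.mp3",
            "Britney Spears - Hit me baby one more time.mp3",
            q + " crack.exe",
            "Britney Spears - Oops I did it again.mp3"]
```

Running `filtered_search("Britney", fake_network)` keeps only the two genuine Britney Spears results; the bot's echoed "Britne ..." names filter themselves out, exactly as the post describes.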
-
Plum, old chap, I understood your theory the first time, but as you may have failed to appreciate, I try to pad out my reports as best I can when questions are asked, to cover as many areas as possible when it comes to the new-client topic. Thus I answered in a way that would deliver the most information in the shortest space of time to the most folks.
Your idea is fine; I think it's a fine idea, if that cheers you up :-D
Now I'm not sure why you sound angry, but there's no need to be; we are all friends here, and all good ideas are taken on board and implemented when we have the time. I hope that's good enough for you. Of course, thank you for sharing your idea, and have a happy new year 8)
-
Thank you for your understanding and explanation. Happy New Year to you too! It might already be the new year for you, but I have about 4 more hours. I guess I feel like I am talking to brick walls at times. I've been to a number of open source projects where everyone has an attitude or treats others like newbies when they are not. I ought to learn C. I used to code in QuickBasic and assembly under DOS. One other project I keep tabs on is ReactOS. I just wish they could get it to run on most real hardware (within reasonable limits). Anyway, take care and enjoy your new year!
-
Thank you in return. Happy new year, Plum, and to the rest of the fine community here: I wish you all a great 2014 and look forward to better times and more fun :-D :-D
-
Sounds to me like PLUM's idea/tool has merit, could be offered as a plugin...
The thought occurred to me too that, while folks are grumbling about OurMX, they are comparing a new, first-gen development against the much-loved V3.54b4.
When OurMX finally gets up to the level of a V3.5.x.x, I am sure that it will be just as polished and functional as WinMX, but without the vulnerabilities.
Be fair and patient, guys.
-
One of the benefits of having a "live" development is that there is no need for "plugins" etc.; all of the features folks want can be added and offered as options to turn on or off as they choose.
As always, thank you for your kind words of support, Bluey. The development team will reach a stage where they can announce the opening of the client src, and that's when those who wish can assist in adding or improving features that have been missed or that there has not been time for. I hope that sits well with the community and with the long-term future we all look forward to.
We can at least boast that, as a community, we have always turned our dreams into reality by sheer effort, and whilst we are able to do that we shall not falter :-D
-
Yes, you have a very valid point. People are comparing the new client to the old one. But it is a learning curve for the developers I would assume. My main problem is that it won't start downloads for me.
And I hope in future versions you can select multiple files at once, and do so without a confirmation. I'd love to see it take advantage of multithreading, so that the search code is independent from the UI code, reducing the illusion of the client hanging during huge searches or when restarting many files. On WinMX, while multicore PCs helped overall system stability when running it, and allowed you to watch movies, listen to audio, and play games at the same time, there were still times when WinMX would deadlock a single core for several minutes. If it were split into threads and isolated, the program would still appear functional and not alarm users.
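The thread-isolation idea above can be illustrated with a small sketch (Python for brevity; `start_search` and the queue-based hand-off are assumptions for illustration, not the OurMx design):

```python
import queue
import threading

def start_search(query, network_search, result_queue):
    """Run a possibly slow network search on a worker thread so the UI
    thread never blocks; the UI drains result_queue at its own pace."""
    def worker():
        for hit in network_search(query):
            result_queue.put(hit)
        result_queue.put(None)              # sentinel: search finished
    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t
```

The UI thread would poll the queue on a timer and repaint incrementally, so even a huge search never freezes the window.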
Yes, vulnerabilities are the important part. Included in that is helping prevent both main problems: spamming real but unrequested files (mainly a WinMX problem) and spamming fakes (mostly a Gnutella problem). A double-check at the UI level, comparing the original search with each result, would reduce the unrequested-real-files attack. However, an active, deliberate attack would still consume network and CPU bandwidth, though such UI-level filtering would reduce CPU usage and help prevent out-of-memory problems. For fake files, the UI-level filtering would help if coupled with a method to smoke out the spam makers. Again, bandwidth would still be consumed. Ideally, though, a robust protocol should be the first line of defense. What I liked about WinMX before it was disassembled was that it had fewer fake files and better chances of finding what you wanted.
Being able to easily move peer caches is good for future-proofing, and that is a nice feature which already exists. That is what had to be hack-fixed on WinMX after the domains were surrendered.
I'd also like to see file-extension verification. Code-wise, that is not too hard to do. For instance, .rar files start with "Rar!" in their main header, .zip files start with "PK", .wav and .avi files contain "RIFF" in their header, executables start with "MZ" at the beginning of the file, etc. The problem is files with the wrong extension. It would be nice if the network were intelligent enough to find those and rename them. I was once downloading Windows and unknowingly downloaded a movie in a language I didn't even speak: someone had renamed an AVI file to have the Windows name and a .EXE, .ISO, .ZIP, or .RAR extension. Had I known it was really an .AVI, I would never have downloaded it.
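A minimal sniffing routine along those lines might look like this (the signatures are the real published magic numbers for these formats; the function and table names are just for illustration):

```python
# First-bytes signatures ("magic numbers") for a few common formats.
MAGIC = [
    (b"Rar!\x1a\x07", ".rar"),      # RAR archive
    (b"PK\x03\x04", ".zip"),        # ZIP local-file header
    (b"RIFF", ".avi or .wav"),      # RIFF container (AVI/WAV)
    (b"MZ", ".exe"),                # DOS/Windows executable
]

def sniff_extension(header: bytes):
    """Return the extension implied by a file's opening bytes,
    or None if the format is not recognised."""
    for magic, ext in MAGIC:
        if header.startswith(magic):
            return ext
    return None
```

A client could compare `sniff_extension` of the first few downloaded bytes against the advertised extension and flag mismatches, catching the renamed-.AVI trick described above. (For RIFF, bytes 8-11 further distinguish "AVI " from "WAVE".)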
While OurMX doesn't have the .INI-hammering bug (constantly saving the .INI) I saw in WinMX, I don't know how well it handles saving downloads, since right now it doesn't even add them to the download window or start them. But WinMX had one serious problem here: it never flushed files to disk unless you deliberately stopped the file or exited the client. That was a problem because, if the program ended unexpectedly, or a lockup or reboot occurred, the file would have to be downloaded again from scratch or from the last incomplete-download point, not from where it was when the crash occurred. In addition, disk repair would need to be run (lost clusters). What I feel should be done is for the file handles to be periodically flushed during downloads (particularly for huge files).
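A periodic-flush download writer could work roughly like this (a sketch only; the 30-second interval and the `DownloadWriter` name are assumptions, not anything from OurMx):

```python
import os
import time

FLUSH_INTERVAL = 30.0   # seconds between forced flushes (tunable)

class DownloadWriter:
    """Appends received chunks and periodically forces them to disk,
    so a crash loses at most FLUSH_INTERVAL seconds of data."""
    def __init__(self, path):
        self.f = open(path, "ab")
        self.last_flush = time.monotonic()

    def write(self, chunk: bytes):
        self.f.write(chunk)
        if time.monotonic() - self.last_flush >= FLUSH_INTERVAL:
            self.f.flush()                  # userspace buffer -> OS
            os.fsync(self.f.fileno())       # OS cache -> disk
            self.last_flush = time.monotonic()

    def close(self):
        self.f.flush()
        os.fsync(self.f.fileno())
        self.f.close()
```

The trade-off is a little extra disk activity during the transfer in exchange for resumable progress after a crash; for huge files that is almost always worth it.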
One feature some might want, which I have mixed feelings about, would be multiple protocols. That introduces new issues, such as vulnerabilities from the other protocols and cross-protocol leeching (i.e., one-way support for other protocols is just bad practice and unfair). That is why in multiple-protocol clients I usually disable the "G1" protocol. That is the original Gnutella protocol, and for a long time it was the one used most for fake files; they may have moved that activity to G2 by now. Implementing it would not be too hard, since there is plenty of open-source code out there for it. To make it easy, there is GnucDNA, a .DLL library for client designers which contains the entire Gnucleus protocol code. The advantage of that approach is ease of updating: just replace the .DLL file, with no need to rewrite the client to include a protocol patch. Anyway, I've run Gnutella-based clients and WinMX at the same time when I had difficulty finding certain files.
-
Speaking of features: for a while, I used Leech Hammer or some such tool. It would be nice if its features were in the client. In addition to stopping downloaders with no files or slots, it added other features such as the ability to block Unicode results. I don't speak any Asian languages, so why should I see them in my results? The reverse would be true for Asian users: why should they see Latin-script languages they cannot read? But that would only block them at the UI level, not restrict anything for anyone else, so the client could very well be sharing or forwarding files outside of one's own language; that particular user just would not see them as results. I have another possible idea there, to do with network affinity: if you specify the languages you speak and the other clients are told that, that data could be used to return/forward only certain results, or even as a reason to disconnect from a peer. That might be better than selecting by geography (ping-rate and latency issues aside).
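The UI-level language screen could be as simple as a script check. (Illustrative sketch: the code-point ranges are the standard Unicode blocks for kana, CJK ideographs, and Hangul; the function names and the on/off flag are assumptions.)

```python
def contains_cjk(text: str) -> bool:
    """True if any character falls in common CJK/kana/hangul blocks."""
    for ch in text:
        cp = ord(ch)
        if (0x3040 <= cp <= 0x30FF          # Japanese hiragana/katakana
                or 0x4E00 <= cp <= 0x9FFF   # CJK unified ideographs
                or 0xAC00 <= cp <= 0xD7AF): # Hangul syllables
            return True
    return False

def filter_results(results, show_cjk=False):
    """Display-side filter only: hidden results are still shared and
    forwarded by the client, exactly as described in the post above."""
    if show_cjk:
        return list(results)
    return [r for r in results if not contains_cjk(r)]
```

For Asian users the same idea would run in reverse, hiding Latin-only names; the network-affinity variant would move this check into the peer-selection logic instead.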
-
The downloading side of things should work OK on the secondary side, Plum; it's been knobbled on the primary side while an array mechanism is perfected to handle the further requirements of primary transfers.
Whilst not ideal, it would be simple to add the filtering of non-local languages by utilising the region-selection entry and adding an extra packet field to the search and result mechanisms to parse "out-of-region" results from the mass. I would also have suggested the language-file selection (new in beta 2) as something to add, but until we have more translations to hand, that's pointless atm.
I have investigated the G2 protocol only recently, to compare some ideas I had for a new anti-attacker mechanism, and it seems my idea was similar to the one they use to prevent "traffic amplification attacks", which we all know by a more common name.
The improvements I have pencilled in at this time all involve modifying the network packet header to add further technical traps against replay attacks, and unfortunately this means we will need to diverge from the main network at some stage. Perhaps the usage of some new packets, and the ignoring of some of the older ones, can leave us with a partial compatibility level, but it's been known for a long time that something has to "give" to clear out the anti-p2p rats who try their best to censor this network.
On the issue of MxMonitor-style tools, I can say that although they would not be too hard to add, given that the whole control system is open to adjustment, they are not something the primary developers are planning to build in. So if someone else wants to build this, there is data available to help plan out the relevant control loops.
-
Interesting and reassuring comments. Thank you!
I didn't know that the files worked under secondary but not primary. Good to know.
The reason I suggested a few features that could be implemented in plugins is that they would likely be more efficient in the main code. Tools like LeechHammer and MxMonitor add considerable CPU cycles and contribute to instability (and the WinMX file-save bug exacerbated that, since MxMonitor might crash WinMX and cause it to lose files it was saving, creating lost clusters). They could cause race conditions or protection faults. And yes, it was MxMonitor I used to use, not LeechHammer; thank you for reminding me. That is why I'd put leech detection in as an option if I were a developer: it could cut network usage where instability from third-party programs is a problem.
One thought came to mind: why not keep the "MX1" protocol even after adopting "MX2" and make them configurable (i.e., enable support to reach those who have not switched over, with the option to disable it during an attack)? Similar to how other clients handle G1 and G2, but with our protocols.
On "traffic amplification attacks", I assume DoS or DDoS are the more common terms, though they are broader in meaning, in that they cover all Internet traffic.
-
i would hope that the new client eventually drops everything broken about the old client rather than trying to remain compatible... once the switchover to the new client is complete there is no reason to keep backwards compatibility....
i also hope no 'trade-only' type stuff makes it into the new client... leech controls i get, but no 'trade'...
-
The protocols will have to be changed to stop the attacks and to move forward with things other than just the protection against the attacks.
As far as the trade and leech controls go: there is enough division within the community about these topics that they won't make the top of the to-do list for the ourmx developers until there is nothing left to do that the community is of one mind about. I don't see the 'to-do' list being that short for some time.
-
I can see either argument: including two protocols with the ability to disable the old one, or just doing a full port. Maybe keep OpenNap mostly as it is, or harden it, in either case.
Yes, I can see adding anti-leech stuff, so long as it is off by default. Like you, I don't agree with the forced-trading stuff. Keeping off non-contributors is one thing; it can help the network and increase the incentive to be vigilant by distributing the risk. But forced trading seems more harmful. If you want to do that, why not just transfer through OpenNap or some closed channel, like PM emails for sharing? Or write a client just for them, possibly reusing code from an existing project.
Anyway, I think a leech option should be added, in part to enhance security. Think about it: what if add-ons compromise the network in some way? So there is an incentive to build the most popular add-on features into the project, where they would be tested and designed in, instead of left to some shoddy third-party add-on that does everyone more harm than good.
I can understand why the protocols would need to be updated, and that hardening is only part of it. With G2, I was around the forum where the young guy who proposed it unveiled it. He made it extensible and packet-oriented, so it had backwards and forwards compatibility built in. Older clients could safely ignore the parts of the payload they could not understand, and newer clients could add client-specific features and coexist with other, slightly different clients on the same network. G1 could not do that; if reverse compatibility between subversions was a goal, it had to be specifically programmed (otherwise, "snubbing" would result). I am curious what other features an MX protocol update would provide besides security.
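That extensibility property is easy to picture as a type-length-value stream: a parser handles the record types it knows and skips the rest, so old and new clients coexist on one network. (The framing below is illustrative only, not the real G2 or WPN layout.)

```python
import struct

KNOWN_TYPES = {1: "ping", 2: "search", 3: "result"}

def parse_stream(buf: bytes):
    """Walk <type:u16><length:u16><payload> records, decoding known types
    and silently skipping unknown ones -- forward compatibility for free."""
    decoded, i = [], 0
    while i + 4 <= len(buf):
        ptype, plen = struct.unpack_from("<HH", buf, i)
        i += 4
        payload = buf[i:i + plen]
        i += plen
        if ptype in KNOWN_TYPES:
            decoded.append((KNOWN_TYPES[ptype], payload))
        # unknown ptype: a newer client's extension; older clients just move on
    return decoded
```

A fixed-layout protocol like G1 lacks the length prefix on unknown records, which is exactly why older clients can't skip what they don't understand.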
-
Hopefully re-implement the regionalisation across the network, which is not currently managed (partly due to the caches, but also down to the patch; with the new client handling the region markers correctly, this should change).
Perhaps add extra new packets to support IPv6? The current packets can't be used, as the IP field is a 4-byte field that would need increasing in every packet. Therefore, as well as forward and backward compatibility, it will need to be IPv4- and IPv6-compatible.
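The 4-byte constraint is easy to see in packet terms. A sketch of the two field layouts (hypothetical layouts for illustration; real WPN packets differ):

```python
import socket
import struct

def pack_peer_v4(ip: str, port: int) -> bytes:
    """Legacy-style peer field: 4-byte IPv4 address + 2-byte port."""
    return socket.inet_pton(socket.AF_INET, ip) + struct.pack("<H", port)

def pack_peer_v6(ip: str, port: int) -> bytes:
    """A new packet type would need a 16-byte address field instead."""
    return socket.inet_pton(socket.AF_INET6, ip) + struct.pack("<H", port)
```

A dual-stack client would emit both field shapes under distinct packet types, so older clients simply never see (and never mis-parse) the wider IPv6 records.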
-
I see. Yes, the regional thing was temporarily discarded while trying to get WinMX working again. Before, it had hard-coded IP addresses for servers in different places. It was much easier to replace some of the addresses rather than all of them, or to reassign most to the same place. It worked, but no longer gave preference to shorter distances. For now, the goal is just to get it working. The primary side needs more work.
-
There's no need to hard-code IP addresses; instead, maybe prioritise connecting to primary nodes that are in the same region before seeking those outside it.
The primary side isn't too bad, tbh. So long as some checks are placed on packet contents, with primary connections that send dodgy packets being dropped, then in theory the spammers would fall off the net and wouldn't be able to use the network against itself to pass on their packets. With a new client, adding additional two-tier packets/types should be easy enough, and it would also be a perfect time to implement IPv6, imho; then run the client with both sets of packet handling so it can run on the old and "new" network. The file-transfer and sharing side doesn't really need touching, since it's direct between one person and another, so there's technically no need to fully split the network.
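The region-priority idea above fits in one small function (a sketch; the `(host, region)` tuple shape is an assumption for illustration):

```python
def order_peers(peers, my_region):
    """Order candidate primaries so same-region nodes are tried first,
    falling back to out-of-region nodes only afterwards."""
    same = [p for p in peers if p[1] == my_region]
    other = [p for p in peers if p[1] != my_region]
    return same + other
```

Because it only reorders rather than filters, a client in a sparsely populated region still finds peers; it just tries the nearby ones first.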
-
The file transfer and sharing doesn't really need touching....
<sarcasm> yes, because leaving 'get winmx' won't confuse those DPI programs at all </sarcasm>
-
I think we looked at leaving things as they were in the transfers department and made the decision to upgrade that area for the reasons alluded to in Stripes' post above. The GET string is one of the most giveaway clues to the throttling-equipment vendors, and it's the duty of all p2p developers to work on ways to counteract such an own goal; we do, after all, pay for the bandwidth we use.