About the MC744: at first I thought it was something like "look ahead". But the more I think about it, it most likely means the time allowed for a single block execution vs feed vs positional sample data.
So doubling the value would mean that the actual feed rate is halved. Not sure if this is actually the case, but the documentation does mention that slide movement gets slower when increasing the value. So if the slide movement were 1000mm/min with C2, C4 would make it 500mm/min, C8 250mm/min and C16 125mm/min. But as I said, I'm not sure of this at all. It NEEDS to be tested!
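The halving hypothesis can be written down as a quick sanity check. UNTESTED, and the formula is purely my guess, not from the documentation:

```python
# Hypothesis (UNTESTED): doubling MC744 halves the effective feed,
# relative to the C2 baseline of 1000 mm/min used as an example above.
def effective_feed(base_feed_mm_min, mc744):
    # base_feed_mm_min is the feed observed at MC744 = 2
    return base_feed_mm_min * 2 / mc744

for c in (2, 4, 8, 16):
    print(f"C{c}: {effective_feed(1000, c):.0f} mm/min")
# C2: 1000, C4: 500, C8: 250, C16: 125
```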
But about the project. I browsed through the documentation of the CNC5000 CPU mod and made a "funny" discovery. Despite the 386 being driven at 20 or 25MHz (the latter meaning nearly double the processing capability), and despite a more advanced backplane bus architecture with higher bus frequencies (I found 6 and 10MHz, plus more offloading capability in the motion controller, vs 1,5 and 3,0MHz), the DMA / CPU wait logic still uses the same 1.5MHz clock signal as the 8088 does! Talk about legacy!
I guess the base code is very much the same and they didn't bother to change this for more advanced control.
More power, or go home
Nothing much has happened besides waiting.
I ordered some parts to create a jumper setup to change the 1M5 bus speed to 3M0 and my friend found out that there might be a hidden game in the controller software.
Also, I made a test machining model and program. It's a relatively simple no-purpose model, and the program has 3D adaptive clearing, morphed spiral, radial and ramp. All are "machined" with an imaginary 20mm/2,5mm bull nose milling cutter at 1000mm/min for all tasks.
Model can be seen here: https://a360.co/33rF2kL
Adaptive has 1mm smoothing and tolerance, so there will be . All the rest are with 0,01mm tolerance and no smoothing. Calculated machining time is 14min 48s.
DON'T DO ANY ACTUAL MACHINING WITH THIS PROGRAM. If you do, the stock is X102 x Y102 x Z91mm, and prepare for some proper chips during the 3D adaptive!
I think that there will be a simple protocol:
- Constant N93 80 (this is important as the allowed start moment depends on the number of downloaded lines)
- Set feed override to 100%
- Start BTR transfer and start stop watch to time the machining time
- Cycle start when 1000 lines have been downloaded
- make notes of "Jerk 'o Meter" movement
- mark time stamps when/if machine waits for code
- mark the time of machining when the spindle stops
- If there are many waiting periods, reduce feed override until there are no more and mark the actual machining time.
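The protocol above could be logged with a trivial stopwatch script running on a laptop next to the machine. Just a sketch; all names here are my own invention:

```python
import time

class MachiningLog:
    """Minimal stopwatch/event log for the BTR test protocol above."""
    def __init__(self):
        self.t0 = None
        self.events = []

    def start(self):
        # Call when the BTR transfer is started
        self.t0 = time.monotonic()

    def mark(self, label):
        # e.g. "cycle start @ 1000 lines", "waiting for code", "spindle stop"
        event = (time.monotonic() - self.t0, label)
        self.events.append(event)
        return event
```

At the end of the run, the timestamp of the last "spindle stop" mark is the actual machining time.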
I will design a simple "Jerk 'o Meter": I'm thinking of making it based on a hacksaw blade and a "wiggle scale" printed on an A4.
Does this sound reasonable?
There is also a machine constant that affects the movement performance. I have it at 2, which means that it takes 30ms to process the movement result.
"MC 744 FEED LIMITATION (1-20*15MS)
The feed is always calculated per sample in a way that the feed per sample is equal
for the total movement.
Due to the fact that the minimum number of samples per movement is 2, there can
be a problem with small movements.
To prevent stuttering of the slide due to variation of the feed output, the minimum
number of samples per program block can be set higher. But, the higher the value,
the slower the slide will move when small steps are programmed.
The maximum number of samples per program block is 20."
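To see why a higher MC744 value slows down short moves, here is a rough sketch of my reading of that excerpt. The 15ms per sample comes from the "1-20*15MS" heading; the function and the 0.1mm example are my own assumptions, not from the manual:

```python
SAMPLE_MS = 15  # per the manual heading "MC 744 FEED LIMITATION (1-20*15MS)"

def max_feed_mm_min(move_len_mm, mc744):
    # A program block must span at least mc744 samples, so its time floor
    # is mc744 * 15ms; this caps the feed achievable on short moves.
    time_floor_s = mc744 * SAMPLE_MS / 1000.0
    return move_len_mm / time_floor_s * 60.0

# e.g. a tiny 0.1mm G1 segment:
for mc in (2, 4, 20):
    print(f"MC744={mc}: max {max_feed_mm_min(0.1, mc):.0f} mm/min")
# MC744=2: 200, MC744=4: 100, MC744=20: 20
```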
Not sure, but I think that if/when the bus speed is doubled, the sample time will be halved from 30ms to 15ms. BUT this can also cause positional errors, and all of the time-based values, such as acceleration values, would have to be adjusted according to the changed time base. This will be seen.
So, if I understand correctly: to speed up that CP card, I just need to relocate the jumper as you marked, and change the oscillator?
BTW, does somebody know: I have a CP board that is defective (chessboard pattern on the screen), what could it be?
Sounds like a perfect test item
But by the looks of it, and if you have it set to 6MHz, then yes. Those seem to be the only changes needed to get approx one third of extra computing power. Sounds like a lot, doesn't it?!?
I looked at the V20, and it seems some Chinese sellers are offering faked items, who knew? But this seems to be a legit item: https://www.ebay.com/itm/224177951505
Here is some info for purchasing; it seems many have had success with the cheap Chinese ones, too: https://www.vcfed.org/forum/fo…s/60635-nec-v20-s-on-ebay
I could send you a slow cpm module - I think I got plenty...
I suppose I could make the changes, but what about the machine parameters, as I have different servos etc.? But I suppose I could see if it works at all?
I did some data digging in the data file Florian sent me and made some quick discoveries - in the middle of the night, why not. That was last night. And this whole evening toda... uhm, yesterday. And now.
- It seems that the Graph Mod and Input Output Mod don't use the CL1M5 or CL3M0 clock signal from the bus. The Graph Mod has its own clock generator for the on-board 8088 and its peripherals
- The Control Telet uses the 1M5 clock for the keyboard controller, and it has separate generation for a 6MHz clock.
- The RM boards use the 1M5 bus clock signal to clock various operations
- The LS would use the 3M0, and it has many similar components, such as the 8253 that handles bus and timing operations.
It also seems that the servo drive circuit is independent of any outside clock source for its operations, and if I understood it correctly, the speed has increased through the increase of clock speed and calculation capacity. Not sure of this though; I'll have to ask my friend who knows way more than me about these things.
What I'm thinking is to double the 1M5 signal that is generated in the CPU module to 3M0. The easiest way to do this would be by putting two DIL14 sockets in place of the 74HCT393 counter that is located on the CPU board and receives its clock signal from the 24MHz oscillator: the first socket goes on the PCB as normal, and the chip is inserted into a second socket that has pin 6 cut off, with a jumper lead from pin 5 to pin 6 mounted on the PCB.
Or it might be that I simply put in one socket, cut pin 6 off the IC and place a jumper lead between pins 5 and 6. Those chips are some 2€/piece, so no worries.
There are three possible outcomes:
1) Nothing special happens in one way or the other
2) Improvement in motion control as feedback input signal processing works twice as fast
3) Mayhem with Control Telet and input measurements going haywire
I hope something happens, let it be two!
I received the A82380-25's I ordered. So...
I made three tests:
1: Installed the new DMA controller and replaced the 40MHz oscillator with a 50MHz one, raising the processor speed from 20 to 25MHz.
The machine booted, but when I was transmitting the parameters it gave me a memory parity error, and as I tried again, it gave an INT 13 error. So 25MHz is too much. I asked my brain-friend if changing to faster memory chips would help, so this needs further investigation.
I ran two test runs with some random program I had in the "contour" folder. I have no idea what it is, but it has some tiny G1's, and with 140% feed override it runs at 980mm/min. Not sure if that's true, as I have no idea of the model it makes, so I can't check the calculated machining time in Fusion360.
MC93 / BTR memory 80kb
Start time at starting the file transfer
Cycle start when 1000 lines have been transferred
End time at M5
"Jerking index" is determined by visually observing the right side splash guard "wobbling"
2: Test run at 20MHz / TI486DLC
Time: 11min 45sec
Data transfer ahead at all times
Low to moderate "jerking index"
3: Test run at 16MHz / TI486DLC
Time: 13min 00sec
Buffer runs out approx after 950 lines of code due to data starvation and after that every 900-1000 executed lines.
moderate "jerking index" / not as bad as I thought
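For comparing runs, the relative gain is simple arithmetic; here's a tiny helper (the function name is mine) using the two times above:

```python
def speedup_pct(t_baseline_s, t_new_s):
    # How much faster the new run was, relative to the baseline run
    return (t_baseline_s - t_new_s) / t_baseline_s * 100

t_16mhz = 13 * 60       # 13min 00s at 16MHz
t_20mhz = 11 * 60 + 45  # 11min 45s at 20MHz
print(f"{speedup_pct(t_16mhz, t_20mhz):.1f}% faster at 20MHz")  # 9.6% faster
```

So a 25% clock increase bought roughly a 9.6% shorter cycle on this particular program; the data starvation at 16MHz presumably eats part of the difference.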
- I will make some random design in Fusion and create various tool paths with different feed rates and we can use it as a test standard when determining the machine capabilities and actual speeds with different CPU's and possible upgrades.
- I will also try to make an easy and cheap to reproduce "jerk o' meter" so we can get some "index numbers" with using the standard test file.
- I will then make test run also with the stock 386DX-16 and 387DX-16 CPU's
So far what I dug up:
Seems my speculations were pretty much spot on
Oscillator -> D8284, which creates a clock signal at 1/3 of the oscillator frequency, and a 3 or 4MHz PCLK at 1/2 of the clock signal.
The 3MHz is further divided to 1.5MHz with a 74LS163, and there is another 74LS163 that is not used if the oscillator is 18MHz and the CPU runs at 6MHz.
Here comes the fun part: if you have a 6MHz CPU (18MHz oscillator), you can throw in a 24MHz oscillator and move the jumper next to the 74LS163 (could probably be a 74LS363 too) to a different position, and it creates a 3MHz signal.
There are two 74LS163's. If the oscillator is 18MHz, then the 8284 outputs give 6 and 3MHz, as described above, and the other 74LS163 is used to create the 1.5MHz; they seem to divide the input frequency by four.
So, by soldering a jumper to another spot and replacing the 18MHz oscillator with a 24MHz one, you can increase the clock speed from 6 to 8MHz, hence getting a nice computing power increase. And if you purchase a legit V20 etc. processor (there are counterfeits out there!), you can gain some more because of the more powerful CPU architecture (or some $hit). I feel this mod is safe as it is described in the documentation!
Lifting the freq higher than that will need "advanced" tasks, as the control bus frequencies are generated by the 1M5 and 3M0 (1,5 and 3,0MHz respectively). At least the interrupt control and the serial data peripheral have the 1M5 wired in, but the wait state logic is based on the clock signal, so I guess it would be possible to hack the damn thing so that the 1M5 and 3M0 are still being generated, by replacing the counter chips.
For example, using a 16MHz-capable NEC CPU at 12MHz would require a 36MHz oscillator (36/3=12). Then the 8284 would give out a "6M0" from PCLK, which would need to be divided by two and by four to get 3M0 and 1M5 respectively. The other option is to use the OSC output and divide it by 12, and use the 3M0 as it is used originally, as described above. This means that the 74LS163 that would be activated in the clock change from 6 to 8 would be replaced by something that can transform the 8284 output (OSC=36MHz or PCLK=6MHz) to 3M0.
PCLK could easily be retrieved from where the jumper would be removed.
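The divider chain above can be sanity-checked with plain arithmetic. This is only a sketch of my reading (CLK = OSC/3 and PCLK = CLK/2 per the 8284 datasheet; the printout is my own):

```python
def clock_tree(osc_mhz):
    # D8284 clock generator: CLK = OSC/3, PCLK = CLK/2
    clk = osc_mhz / 3
    pclk = clk / 2
    return clk, pclk

for osc in (18, 24, 36):
    clk, pclk = clock_tree(osc)
    # the 1M5/3M0 bus clocks must come out of whatever divides PCLK further
    print(f"OSC {osc}MHz -> CPU {clk:.0f}MHz, PCLK {pclk:.0f}MHz, "
          f"PCLK/2 = {pclk/2:.2f}MHz, PCLK/4 = {pclk/4:.2f}MHz")
```

Note how with a 24MHz oscillator, PCLK/2 gives 2MHz instead of 3M0 (presumably why the jumper selects the other divider path), while with 36MHz, PCLK/2 and PCLK/4 give exactly 3M0 and 1M5.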
It is possible that if the clock is raised to 12MHz, the memory chips would need to be replaced with faster ones. I didn't look into the EPROM's, but they are 300ns, and the SRAM's are 200ns. A 16MHz 386 system uses 200ns and 120ns respectively, but I didn't notice any problems with the 20MHz clock, so it might be that there wouldn't be any issues. But worth a mention. Also, I would suggest getting a spare CPU module to play with.
It seems also that the 1.5 and 3.0MHz signals are the legacy burden that even the 386 has to bear... Have to dig in some more!
Is it possible to upgrade processor card number 4022 226 3340?
I have no idea, but seems I received a precious data package from a true Maho angel and will definitely ruin many nights of my sleep! We'll see!
Quote from V3SDave
Whaaaaat, an 8088 upgrade? What do I have to do?
Let's keep the 386P upgrade thread clean from other CPU upgrade possibilities, so I started another topic. Didn't realize there was interest for this!
Yes, the NEC V20, V20HL and I suppose the V30, too, would all be suitable as performance-adding replacements, and they all also allow lifting the clock speed: get one, plug it in and make tests before and after.
For example, make a program that has a lot of 3D contouring, see how the machine performs, and time how long it actually takes to finish the task. Then plug the upgrade in, run the same test again, and compare the times, and visually compare if there is any difference in "jerking motion".
I found a rather good image of an 8088 CPU board; not sure if yours is the same though. I believe you run a 432/9 control, which I'm completely unfamiliar with.
Anyway, let's assume that the board is the very same and the only differences are the EPROM's, if even those: we can see that there is only one oscillator, running at 24 MHz. As we know from the 386 upgrade topic, there, too, is a 24 MHz crystal oscillator, and we know that it runs only the control bus there, and that three different frequencies are derived from that 24MHz by a ripple counter chip, which gives out 1.5, 3.0 and 6.0 MHz.
Now that we see only one oscillator, we can only assume that it provides the clock frequency for the whole system. This possibly makes raising the CPU frequency a much more difficult task, and without the PCB layout / repair diagnostics documentation, it is impossible to say how the clocking is done on that board.
If/when there is a clock processing chip somewhere on the board (those P8253 at the bottom right are timing devices with two clock inputs), it would be beneficial to know if the memory operations and the CPU are using the same clock speed. If so, it could be possible to make a simple on-board modification so that there is a separate clock for those operations, leaving the control bus handling alone. The simplest way to do this, I guess, would be installing an extra socket under the CPU and cutting off the CLK signal, doing the same for the clock processing chip, and installing a separate oscillator and ripple counter (for example) to feed the clock signal to the CPU etc. I guess a separate test PCB would do the trick.
Those P8253 suggest that timing logic is similar to what the DMA controller on 386P board does, I _guess_.
The 8088-2 in the picture can run at 8MHz. If the clock processing is the same as on the 386P, then it would probably run at 6MHz. It could also run at 8MHz (24/3). The latter might be the case if the unreadable chip to the right of the oscillator is a D8284, as it divides the oscillator frequency by three (1/3). Its datasheet says that it provides the clock signal to the CPU and to all bus devices connected to the CPU. I think it's highly unlikely that the control bus speed would be 8 or even 6MHz, as the more modern control has the 1,5/3MHz clocks, but from those pics I can't say where else clock signals are processed/generated.
I suppose that the 24MHz clock is a legacy issue in general, as old modules were recycled into newer controls and they needed to work?
Btw, I hope I write such sensible English that something can be understood by using a translator
Now I'm thinking of upgrading the control bus speed (there are two bus clocks; what are their purposes?) to (possibly) increase the actual feed rate while processing those tiny G1's. As we can see from the video, the difference in actual feed rate is quite remarkable between G3 movements and G1's.
I can foresee several problems in doing this:
- how the timing of other cards is affected
- are the control bus clocks being used to time the measurement processing and the time-vs-motion calculations? This can be a game stopper, unless it can be remedied by machine constants, or it simply "doesn't matter".
- are the control bus clocks being used in A/D and D/A conversion in servo control and possible analog measurements (such as in temperature compensation). This can be a game stopper.
- there are probably more related issues that are potential game stoppers.
Some of us know that there is a 1.5MHz clock signal, generated from the 24MHz system clock oscillator, going to the A82380 DMA controller. It is used as a secondary clock signal relating to the programmed (in the OS or boot PROM?) wait state registers. I can assume that this is the clock signal used for control commands. I can assume wrong here, can't say without detailed documentation, but it sounds logical to use an operation-critical bus clock for wait state generation / processing.
What would be interesting to know is what the 3MHz and 6MHz clocks are for; they are generated by the 74HCT393 chip. The 3 and 1.5MHz go to the bus and DMA controller, but the 6MHz doesn't seem to go anywhere.
Why wouldn't it be possible? Even the original XT ISA bus ran at 8MHz with similar or older components than are used here.
The benefit, if successful, would of course be that the speed increase would make the command-and-confirm cycle faster and thus raise the highest possible feed rate when using a large amount of short G1's, which would make it more pleasant to use the old control instead of a retrofit. Of course, a retrofit brings the game to a whole new level in some respects, but the Philips control seems to be quite capable, and speeding it up seems to help.
What I would need is the repair documentation for the other modules, to make a series of educated guesses on whether a bus speed upgrade is possible. Not sure if they exist anywhere that would be considered "available", and I'm pretty sure they are not available to someone like me from DMG... So not getting the documents could be the first and ultimate game stopper...
So far, the upgrade has cost 80€ (if I recall correctly; it came in damaged and the eBay seller refunded in full, though) for a motherboard with 486DLC and ULSI FPU, and 2€ for the 40MHz crystal oscillator, and they made a world of difference compared to what it was. I also purchased faster 25ns SRAM's (overkill) to replace the 120ns ones, but haven't needed them yet. So cost-wise, overclocking and CPU replacement seem feasible. However, more tests need to be done, but so far it sure looks good.
G52, G54 etc. are done with point definition. There is parameter N in the measuring cycles for point definition.
The N parameter is under "Menu-G", not in "Free-G".
Oh, but there are plenty of upgrade possibilities for previous generations of CPU's as well!
For example, an 8088 can be replaced with a NEC V20/V20HL upgrade CPU, which is claimed to give a whopping 30% (not sure if true) advantage over the 8088 just by dropping it in - aaaand it can be clocked to 16MHz, whereas the 8088 only (formally) goes to 8MHz! (There is an 8088 on the Graph Mod, btw; upgrade it, why not :D). There is also a V30 model, but at least on eBay there don't seem to be too many available.
And they are cheap: https://www.ebay.com/itm/221579225330
For 286 units there was an array of various upgrades, as they were still in use in the early 90's. I've seen remarks of even a 486SLC2 (Evergreen Rev To 486) implementation for the 286, but I take it these are rare as hen's teeth. Also, there are 25MHz PGA68 Harris-made 286's that can seemingly give performance near a 386 (https://www.vogons.org/viewtopic.php?t=46350). Plenty of these on eBay, even brand new ones!
I would say that as the engineers were facing less computing power, they had to write very efficient code, so I guess there are relatively big gains available, at least for the 8088 guys, who only need to make a drop-in upgrade, as there is no soldering required.
That is not what I mean. Spindle orientation is not right, but maybe "support for spindle swivel position" is?
Haha, it's there, and I am quite serious 😁. The theory was right, it seems 🥰
I bet eBay will now be emptied from 486DLC's by 'Mahoites' 😂
Support for spindle orientation during manual spindle rotation / "Tool Orientation" in Fusion 360.
Huge success so far!
Tested cycles:
- Horizontal with mostly G3 movements
- Spiral with concentric circles (both G3's and G1's), tolerance 0.01mm
- Ramp, tolerance 0.01mm
Original CPU: 386DX-16 + 387DX FPU, running at 16MHz / 32MHz oscillator
- BTR buffer data runout during BTR/DNC transfer with multiple small G1 moves, because of reduced transfer speed
- Stopping the movement for waiting more data
- "Movement jerking" when feed rate is too big.
Tested hardware: TI486DLC-33 +Math Corp ULSI FPU, running at 20MHz / 40MHz oscillator. Oscillator is installed in a DIL14 socket for easy replacement
- CPU failure at startup diagnostics, which seemingly can be ignored, but needs manual action
- No more BTR buffer underrun, data transfer keeps ahead at all times
- Movements get slower during small G1 movements, but no serious jerking!
- The "stopping movement" happens when the concentric spiral enters the next Z level with a smooth transition
- Difference in speed between small G1's and G3's is obvious.
- Tested with feed override, no effect in data transfer even at 1120mm/min!
To be done: Replacing the A82380-16 DMA controller with A82380-25 and further testing with a 50MHz oscillator
-!ACHTUNG!-: Detaching the oscillator from the 386 board: two of the leads on the PCB are on the top side, opposite the solder joints, and care must be taken not to damage the board while heating and detaching the oscillator!!
Canned autism, enjoy! (embedded YouTube video)
No laughing back here. Machining is a summary of errors, and everything affects the end result. And at micron scale, even tenths of a degree C.
I will make the same runs you did so we can compare if it's beneficial to invest in four digit (price wise) hardware.
Btw, I don't fully understand G146 etc. - how do they differ from the basic commands you mentioned, and what's their actual purpose?
But meanwhile, I have news for the CPU upgrade, have to upload an extreme autism video first. And also, last night I purchased one of these: https://www.emgprecision.com/uts-2-tool-height-setter. I was looking for a Renishaw Primo LTS (850USD on eBay) and KGS on eBay, too, for ~290€, but saw the EMG unit and wanted that! I'll make a thread for it once I have it!
It is! I'm rather excited, as this will be a huge time saver and diminish the possibility of screwing up when centering a hole! Not to mention inspecting the work piece and correcting tool offsets etc.
Already drooling while thinking of getting a tool setter