
Thread: Glowforge release

  1. #1366
    Join Date
    Mar 2014
    Location
    Iowa USA
    Posts
    4,441
    They spent a LOT of money and time to make sure the machine does not have a motion controller, just the stepper motors and drivers. Why? To ensure they always have a way to control the customers and to collect money!! From what I saw in the teardown of the machine, there is no room for a conventional controller.
    Retired Guy- Central Iowa.HVAC/R , Cloudray Galvo Fiber , -Windows 10

  2. #1367
    Join Date
    Mar 2005
    Location
    Anaheim, California
    Posts
    6,903
    Quote Originally Posted by Jerome Stanek View Post
    What a lot of the people that already received their units are asking is why it doesn't have a LAN connection. You are forced to use WiFi to run it, even if they release firmware that can use your own computer to control it.
    The control board photo at https://github.com/ScottW514/GF-Hard...dware-Overview shows an unpopulated connector (J117) labeled "Ethernet module". At that point you only need something the size of an Arduino shield and a cable. (I have no idea how they know that's what the connector is intended for, except possibly from (1) which pins on the ARM chip it's connected to, or (2) some silkscreened label on the back side of the physical board.)

    There are also two USB connectors (J12, J13) that could be used if someone were so gauche as to want to connect it directly to a PC.
    Yoga class makes me feel like a total stud, mostly because I'm about as flexible as a 2x4.
    "Design"? Possibly. "Intelligent"? Sure doesn't look like it from this angle.
    We used to be hunter gatherers. Now we're shopper borrowers.
    The three most important words in the English language: "Front Towards Enemy".
    The world makes a lot more sense when you remember that Butthead was the smart one.
    You can never be too rich, too thin, or have too much ammo.

  3. #1368
    Join Date
    Mar 2005
    Location
    Anaheim, California
    Posts
    6,903
    Quote Originally Posted by Bill George View Post
    From what I saw in the teardown of the machine, there is no room for a conventional controller.
    How much additional room or, more to the point, additional hardware do they really need? (See my post #1355)
    Yoga class makes me feel like a total stud, mostly because I'm about as flexible as a 2x4.
    "Design"? Possibly. "Intelligent"? Sure doesn't look like it from this angle.
    We used to be hunter gatherers. Now we're shopper borrowers.
    The three most important words in the English language: "Front Towards Enemy".
    The world makes a lot more sense when you remember that Butthead was the smart one.
    You can never be too rich, too thin, or have too much ammo.

  4. #1369
    As someone else with machine control and drive experience, I'll chime in and say there's a lot of mostly right information, but a few conclusions are wrong. I'd like to say up front that I'm not trying to approve of or decry the merits of the following, only to clarify things.

    First, it's literally impossible to directly control the steppers over Ethernet, for a few reasons. Let's start with how you control steppers. The traditional way is to use step and direction pulses to move, and the timing of these pulses is critical. Ethernet packets do NOT exist as a stream of pulses; they are packetized data, and furthermore are NOT deterministically timed. You cannot control any motor with timing requirements from even "on/off" packets, unless you are OK with jitter on the order of tens of milliseconds. A nontraditional stepper driver, if they decided to reinvent the wheel, would need some way to increment the electrical "rotation" of the output drive signals. You could do this without ever needing "real" step and direction pulses, but like I said there's no real reason to reinvent the wheel, and you don't gain anything performance-wise. You might end up with a cheaper bill of materials, but that's hard to imagine. I would bet they use COTS stepper drivers hooked up to their system, but I digress; even if that is the way they're doing it, they need precise timing control of the signal generation, which is simply impossible over Ethernet.

    It's just the wrong protocol, and it wasn't designed to be real-time at all. It's designed for error correction and guaranteed delivery; there's no guarantee of WHEN signals arrive, only that they eventually get there (unless you have a timing-critical signal that will drop a packet after a time threshold). Other protocols DO ensure timing, and can run over Ethernet *cables* but not "the internet" - see EtherCAT, for instance. You also have the issue of dropped packets to worry about, meaning you can't guarantee all packets arrive at the same rate.
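
    To put rough numbers on the timing problem, here's a quick Python back-of-the-envelope (the figures are for a generic belt-drive machine I made up, not Glowforge's actual specs):

        # Illustrative figures for a generic belt-drive laser, not Glowforge's specs.
        steps_per_mm = 80.0        # common belt-drive resolution
        speed_mm_s = 100.0         # a modest engraving speed

        step_rate_hz = steps_per_mm * speed_mm_s      # 8,000 steps per second
        step_period_ms = 1000.0 / step_rate_hz        # 0.125 ms between step pulses

        network_jitter_ms = 20.0                      # "tens of milliseconds" of jitter

        # One burst of jitter is worth this many mistimed steps:
        print(step_period_ms)                         # 0.125
        print(network_jitter_ms / step_period_ms)     # 160 steps, i.e. 2 mm of position error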

    So, in short, it's impossible to directly control a laser with Ethernet pulses. But that does NOT mean that the entire "step pattern" couldn't be buffered to the device beforehand. If you transmitted the step pattern at a known clock frequency (or embedded the clock frequency with the signal; either way would work), then you could buffer the entire pattern onboard. For example, you could have a known "transmit" rate of 1000 Hz and a stream of 1's and 0's that represent "step" or "don't step". Every 0.001 second, the controller would read the next bit and, if it's a 1, send a step; if it's a 0, don't send a step. This would work, but it would likely be a very large data file. It could likely be compressed somewhat (instead of sending 500 zeros in a row, send "0, repeat 500x", which is a much shorter command). But that compression wouldn't help for lots of jobs; rasters would likely do fairly well with this method, but vectors likely would not.
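
    A crude sketch of that kind of run-length packing in Python (purely illustrative; I have no idea how GF actually encodes anything):

        # Run-length encode a stream of step / don't-step bits sampled at a fixed clock.
        # Illustrative only -- not any actual Glowforge format.
        def rle_encode(bits):
            out = []                          # list of (bit, run_length) pairs
            for b in bits:
                if out and out[-1][0] == b:
                    out[-1] = (b, out[-1][1] + 1)
                else:
                    out.append((b, 1))
            return out

        def rle_decode(pairs):
            for bit, count in pairs:
                for _ in range(count):
                    yield bit                 # replayed at the fixed clock rate, e.g. 1000 Hz

        stream = [0] * 500 + [1] + [0] * 500  # 1001 samples collapse to three pairs
        print(rle_encode(stream))             # [(0, 500), (1, 1), (0, 500)]
        assert list(rle_decode(rle_encode(stream))) == stream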

    Still, the above description is a possible method. I would argue that a "motion controller" is a system that does more than repeat a pre-buffered waveform. Most motion controllers I have seen will offer additional features. G-code interpreters, for example, implement acceleration and jerk profiling; even the most basic 3D printer controllers do this. Most also offer "look-ahead" where they can pre-buffer their move signals in order to anticipate upcoming turns or stops. That's the general function of a motion controller; it controls the motion, it doesn't just blurt out a canned train of ones and zeros.
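
    For contrast, here's the flavor of math a conventional motion controller does on the fly: a toy trapezoidal velocity planner in Python, with made-up numbers (nothing to do with how GF actually plans moves):

        # Toy trapezoidal velocity planning -- the kind of work a real motion
        # controller does on the fly. Numbers are made up for illustration.
        def trapezoid(distance_mm, v_max, accel):
            """Return (t_accel, t_cruise, t_decel) in seconds for one move."""
            d_ramp = v_max ** 2 / (2.0 * accel)      # distance needed to reach full speed
            if 2.0 * d_ramp >= distance_mm:          # short move: never reaches v_max
                t_ramp = (distance_mm / accel) ** 0.5
                return t_ramp, 0.0, t_ramp
            t_ramp = v_max / accel
            t_cruise = (distance_mm - 2.0 * d_ramp) / v_max
            return t_ramp, t_cruise, t_ramp

        print(trapezoid(100.0, v_max=200.0, accel=1000.0))   # (0.2, 0.3, 0.2)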

    I'd argue that it's absolutely possible that GF does all of the motion planning in "the cloud" and then transmits the canned pattern of step and direction pulses to a **buffered** onboard storage device, where a simple DMA channel on the processor repeats the pre-defined step and direction signals. In this way, they're doing all of the path planning AND pulse generation in the cloud, buffering them to a known time waveform, then transmitting that waveform down to the individual units. I don't think that counts as having a motion controller onboard.
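
    The machine-side half of that scheme could be as dumb as the sketch below. This is pure illustration: pulse_pins() is a made-up stand-in for whatever the real hardware does, and a real board would use a hardware timer or DMA rather than sleep().

        import time

        # A "dumb" replay loop: clock out a pre-computed buffer of
        # (step_x, dir_x, step_y, dir_y, power) samples at a fixed rate.
        def pulse_pins(sample):
            pass  # toggle GPIOs / set laser PWM here on real hardware (hypothetical)

        def replay(buffer, tick_hz=10000):
            period = 1.0 / tick_hz
            for sample in buffer:      # no planning, no look-ahead, no G-code:
                pulse_pins(sample)     # just repeat what the server sent
                time.sleep(period)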

    The issue here is that basically no one ever implements a motion control system this way, and there's no real industry term for it. If I offered a "motion controller" board that simply repeated a series of pulses stored in a file, I don't think anyone would say "Yeah, that's a motion controller", because it doesn't do any of the things that "motion controllers" traditionally "do". Does it "control motion" in the literal sense that it's responsible for outputting a series of pulses? Yes. But does it match really *any* feature set of nearly *any* other "motion controller"? No.

    So, to depart a bit from the pedantic discussion of terms (which I do think is important, by the way), I'll say that I don't think their method is a good one, and I'm not defending their decision. I just want to point out that there's a WORLD of difference between buffering G-code and buffering step and direction signals. I could rig an Arduino to repeat canned step and direction signals in an afternoon. Writing a G-code processor from scratch (one that's worth actually using) would take a few weeks. They're truly different things.

    To add some commentary on WHY they'd do this, I'll say that I don't think they gain a ton of power like they claim to. Their embedded processor would be more than capable of crunching their numbers. The only benefit they get is that they don't have to program for an embedded controller; they can do it all in Windows. I'll admit it's a neat concept, but I don't think you get anywhere in terms of actual performance.

    Also, to address this post:

    Quote Originally Posted by William Adams View Post
    The interesting thing is, ping times have gotten so fast, it's actually possible to send a packet almost half-way around the world and get a response more quickly than a machine can change the colour of a pixel on a graphics card and the system can get that change to register on a display (though arguably that example is intended as a criticism of LCD latency).

    The things I'd be curious about for such an approach (motion planner in the cloud) would be:

    - how much bandwidth does it go through? Assuming 4 hrs. of operation per day --- what's the minimum data capacity one would need to budget for it
    - how does the system deal with dropped packets?

    The first statement is mostly false, and I can ballpark some answers to the questions. To address the first statement: the distance from Los Angeles to New York is around 2500 miles as the crow flies. It takes light about 13.4 milliseconds to cover that distance in a vacuum, so sending a packet and getting a response would take roughly 27 ms at absolute best. That's better than a direct fiber link could do (light in fiber is about a third slower, and in reality the route is MUCH longer than 2500 miles and goes through many switches that add delay, but I digress). My cheapo computer monitor has a 96 Hz refresh rate, equivalent to 10.4 ms per frame. 144 Hz monitors are common, which works out to about 7 ms per refresh. It's MUCH faster to change the display than it is to send a packet even across the US, much less around the world. That said, either way it's FAR too slow to use for laser engraving.
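
    Back-of-the-envelope, just to check those numbers:

        # Latency vs. display refresh, rough arithmetic only.
        miles = 2500.0
        c_miles_per_s = 186282.0                 # speed of light in vacuum, miles/s

        one_way_ms = miles / c_miles_per_s * 1000.0
        print(one_way_ms, 2 * one_way_ms)        # ~13.4 ms one way, ~26.8 ms round trip

        for hz in (96, 144):
            print(hz, 1000.0 / hz)               # 10.4 ms and 6.9 ms per refresh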

    First question, how much bandwidth? We can assume that it takes roughly the same bandwidth to transmit the control signals as it does to transmit the actual image in the first place, but even assuming that the step timing is 10x the pixel density, and that there is some other random overhead that accounts for another 10x, that's still just 100 "pictures" worth per image engraved. In other words, not that much; equivalent to a 30 Hz video running for 3 seconds. Given the time it takes to do an engrave I doubt you'd ever run into bandwidth issues, even on a crappy cell phone connection.
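
    To make that concrete with made-up figures (nothing here is measured from an actual GF):

        # Rough bandwidth guess for one engrave job. Every number is invented
        # just to show the order of magnitude.
        image_bytes = 2_000_000            # a ~2 MB bitmap to engrave
        overhead = 10 * 10                 # 10x for step timing, 10x for everything else
        job_bytes = image_bytes * overhead

        engrave_hours = 1.0
        avg_bits_per_s = job_bytes * 8 / (engrave_hours * 3600)

        print(job_bytes / 1e6, "MB per job")        # 200.0 MB
        print(avg_bits_per_s / 1e6, "Mbit/s avg")   # ~0.44 Mbit/s averaged over the job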

    Second question, dropped packets? I discussed this above somewhat, but basically you have to buffer the entire signal and use some sort of error-correction/retransmit scheme. There's zero chance you could control the thing in real time over Ethernet, so you have to assume the signal is buffered.

  5. #1370
    Quote Originally Posted by Bert McMahan View Post
    ...there's a lot of mostly right information, but a few conclusions are wrong...
    ...I would argue that a "motion controller" is a system that does more than repeat a pre-buffered waveform. Most motion controllers I have seen will offer additional features. G-code interpreters, for example, implement acceleration and jerk profiling; even the most basic 3D printer controllers do this. Most also offer "look-ahead" where they can pre-buffer their move signals in order to anticipate upcoming turns or stops. That's the general function of a motion controller; it controls the motion, it doesn't just blurt out a canned train of ones and zeros.

    I'd argue that it's absolutely possible that GF does all of the motion planning in "the cloud" and then transmits the canned pattern of step and direction pulses to a **buffered** onboard storage device, where a simple DMA channel on the processor repeats the pre-defined step and direction signals. In this way, they're doing all of the path planning AND pulse generation in the cloud, buffering them to a known time waveform, then transmitting that waveform down to the individual units. I don't think that counts as having a motion controller onboard.

    The issue here is that basically no one ever implements a motion control system this way, and there's no real industry term for it...
    Really great post, Bert. Thanks for taking the time.

  6. #1371
    Join Date
    Dec 2012
    Location
    Rickmansworth, England
    Posts
    164
    It would still need the data stream to be interpreted and converted, as it will need to be split into three different time-controlled variable pulse trains, one for each stepper axis. The internet information is received as just a serial bit stream and would need to be buffered, analysed, converted, and directed to the relevant motor driver inputs. It's like sending the signal from an aerial to three TVs and having the local tuner in each TV decide which channel it will display; without local control you would only get one channel on all three TVs at the same time.
    Trotec Speedy 300 50W
    Gantry CNC Router/Engraver
    Various softwares
    Always keen to try something new

    Please don't steal - the government hates competition

  7. #1372
    Join Date
    Mar 2005
    Location
    Anaheim, California
    Posts
    6,903
    Quote Originally Posted by Steve Morris View Post
    It would still need the data stream to be interpreted and converted, as it will need to be split into three different time-controlled variable pulse trains, one for each stepper axis.
    Four actually: everyone seems to forget that, for rastering, a laser power modulation signal has to be present, synchronized to the X-axis movement. It's roughly as much data as the stepper motor controls combined.
    Yoga class makes me feel like a total stud, mostly because I'm about as flexible as a 2x4.
    "Design"? Possibly. "Intelligent"? Sure doesn't look like it from this angle.
    We used to be hunter gatherers. Now we're shopper borrowers.
    The three most important words in the English language: "Front Towards Enemy".
    The world makes a lot more sense when you remember that Butthead was the smart one.
    You can never be too rich, too thin, or have too much ammo.

  8. #1373
    Quote Originally Posted by Steve Morris View Post
    It would still need the data stream to be interpreted and converted, as it will need to be split into three different time-controlled variable pulse trains, one for each stepper axis. The internet information is received as just a serial bit stream and would need to be buffered, analysed, converted, and directed to the relevant motor driver inputs. It's like sending the signal from an aerial to three TVs and having the local tuner in each TV decide which channel it will display; without local control you would only get one channel on all three TVs at the same time.
    Exactly my point: you require a buffered signal. The argument as I see it is "does uncompressing a step/direction stream and spitting it out without analysis count as a motion controller", which I argue it does not. I've never seen a device offered as a motion controller that does something this simple. It does "control motion", but the industry-accepted term "motion controller" always includes MUCH more functionality than this.

    Quote Originally Posted by Lee DeRaud View Post
    Four actually: everyone seems to forget that, for rastering, a laser power modulation signal has to be present, synchronized to the X-axis movement. It's roughly as much data as the stepper motor controls combined.
    Didn't forget, that's where my calculation came from, and in fact it's WAY more data for a grayscale raster image. For a raster image, your step sequence is given by an acceleration ramp, a constant speed, then a deceleration ramp. Assuming that the deceleration ramp is simply a reversed acceleration ramp, and that you use the same X ramp for all rows of the image, then it only gets transmitted once and has zero "power" information. Double that, since you need the same thing for Y, but again that's just two waveforms; not much data. You would then just transmit a constant timebase (a single value) for the constant-speed section, which is again a tiny amount of data. Last, you have the power density map, which can be as little as 1 bit per position or as much as 8 bits (256 levels) per position; either way it's on the same order of magnitude as the original image. In short, you need very, very little data to describe the position of the head, and lots of data to describe the power.
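
    Rough numbers just to show the ratio (every figure here is invented, not measured from a GF):

        # Data budget for the raster scheme described above. Invented figures.
        cols, rows = 4000, 2000            # "pixels" in the engraved image
        ramp_steps = 500                   # one X accel ramp, reused for every row
        bytes_per_delay = 2                # inter-step delay as a 16-bit value
        bits_per_pixel = 8                 # 256 power levels (1 bit if on/off only)

        position_bytes = ramp_steps * bytes_per_delay * 2 + 16   # X ramp + Y ramp + timebase
        power_bytes = cols * rows * bits_per_pixel // 8

        print(position_bytes)              # 2016 bytes (~2 KB) of positioning data
        print(power_bytes)                 # 8,000,000 bytes (~8 MB) of power data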

    For a vector image, your power settings are basically constant for each move, so the positioning data FAR outweighs power data in that case.

  9. #1374
    Excellent information Bert! Thanks for sharing it.

    My question is, since they have obviously created some "proprietary" software on the server side to act as a motion controller, how's that going to work if someone wants to yank the cord on the thing, or if they go belly up (which I doubt will happen in the near future)? They'd have to give you some pretty specific code that acts as a motion controller and sends the data to your GF? I don't see them ever letting that code get in the wild.
    Lasers : Trotec Speedy 300 75W, Trotec Speedy 300 80W, Galvo Fiber Laser 20W
    Printers : Mimaki UJF-6042 UV Flatbed Printer , HP Designjet L26500 61" Wide Format Latex Printer, Summa S140-T 48" Vinyl Plotter
    Router : ShopBot 48" x 96" CNC Router Rotary Engravers : (2) Xenetech XOT 16 x 25 Rotary Engravers

    Real name Steve but that name was taken on the forum. Used Middle name. Call me Steve or Scott, doesn't matter.

  10. #1375
    Join Date
    Sep 2009
    Location
    Medina Ohio
    Posts
    4,516
    Quote Originally Posted by Scott Shepherd View Post
    Excellent information Bert! Thanks for sharing it.

    My question is, since they have obviously created some "proprietary" software on the server side to act as a motion controller, how's that going to work if someone wants to yank the cord on the thing, or if they go belly up (which I doubt will happen in the near future)? They'd have to give you some pretty specific code that acts as a motion controller and sends the data to your GF? I don't see them ever letting that code get in the wild.
    I could see them charging for it at some point.

  11. #1376
    Quote Originally Posted by Scott Shepherd View Post
    My question is, since they have obviously created some "proprietary" software on the server side to act as a motion controller, how's that going to work if someone wants to yank the cord on the thing, or if they go belly up (which I doubt will happen in the near future)? They'd have to give you some pretty specific code that acts as a motion controller and sends the data to your GF? I don't see them ever letting that code get in the wild.
    I picture a hardware device, similar to a print server, being required. Proprietary and most likely pricey. Not open source.
    I design, engineer and program all sorts of things.

    Oh, and I use Adobe Illustrator with an Epilog Mini.

  12. #1377
    Join Date
    Mar 2014
    Location
    Iowa USA
    Posts
    4,441
    Bert, thanks for your common-sense reply and recap; it makes a lot more sense than the other lengthy reply that was posted just a short time ago.
    Last edited by Bill George; 12-12-2017 at 5:57 PM.
    Retired Guy- Central Iowa.HVAC/R , Cloudray Galvo Fiber , -Windows 10

  13. #1378
    Quote Originally Posted by Scott Shepherd View Post
    Excellent information Bert! Thanks for sharing it.

    My question is, since they have obviously created some "proprietary" software on the server side to act as a motion controller, how's that going to work if someone wants to yank the cord on the thing, or if they go belly up (which I doubt will happen in the near future)? They'd have to give you some pretty specific code that acts as a motion controller and sends the data to your GF? I don't see them ever letting that code get in the wild.
    This would be a massive concern of mine as well. It could be fun to reverse-engineer their protocol and try to figure out a G-code-to-Glowforge interpreter, but I sure don't want my laser dependent on some kind soul decrypting their proprietary data format!

    Not to mention (and I know this isn't a "pro grade" machine) the question of what assurance we have that any designs we upload aren't shared. And what about ITAR restrictions?

  14. #1379
    Join Date
    Mar 2005
    Location
    Anaheim, California
    Posts
    6,903
    Quote Originally Posted by Bert McMahan View Post
    Exactly my point: you require a buffered signal. The argument as I see it is "does uncompressing a step/direction stream and spitting it out without analysis count as a motion controller", which I argue it does not. I've never seen a device offered as a motion controller that does something this simple. It does "control motion", but the industry-accepted term "motion controller" always includes MUCH more functionality than this.

    Didn't forget, that's where my calculation came from, and in fact it's WAY more data for a grayscale raster image. For a raster image, your step sequence is given by an acceleration ramp, a constant speed, then a deceleration ramp. Assuming that the deceleration ramp is simply a reversed acceleration ramp, and that you use the same X ramp for all rows of the image, then it only gets transmitted once and has zero "power" information.
    When I said "roughly the same", I meant you have two bits each (step/direction) for X and Y (assuming Z fixed for the duration) and 4-8 bits of power information at each laserable position.

    But that assumes a "do everything at the mothership" approach. Once you start talking about acceleration ramps etc being implemented at the machine end of the connection, well, that sounds more and more like a "motion controller".
    Yoga class makes me feel like a total stud, mostly because I'm about as flexible as a 2x4.
    "Design"? Possibly. "Intelligent"? Sure doesn't look like it from this angle.
    We used to be hunter gatherers. Now we're shopper borrowers.
    The three most important words in the English language: "Front Towards Enemy".
    The world makes a lot more sense when you remember that Butthead was the smart one.
    You can never be too rich, too thin, or have too much ammo.

  15. #1380
    Quote Originally Posted by Lee DeRaud View Post
    When I said "roughly the same", I meant you have two bits each (step/direction) for X and Y (assuming Z fixed for the duration) and 4-8 bits of power information at each laserable position.

    But that assumes a "do everything at the mothership" approach. Once you start talking about acceleration ramps etc being implemented at the machine end of the connection, well, that sounds more and more like a "motion controller".
    Fair enough, but you don't even need two bits for X and Y. Assuming it's not speeding up or slowing down during the move, the only information you need is the time between steps and the number of steps in a row. The time between steps is (probably) a constant value for all rows, and the number of steps in a row is given by the number of "pixels" (dots) in the "power" array. You literally only need about 3 values to get the position grid for your X axis, and they're repeated over the whole thing. The power information is different across the entire image.

    And I don't think the acceleration ramps are implemented at the machine end (they may be, who knows), but again, assuming they're doing the processing in the cloud, they could encode that as a series of time delays between steps. For example, the series [100 90 85 80 78 75 70 70 70 70...] could be interpreted as "step, wait 100 clock cycles, step, wait 90 clock cycles, ...etc." Nothing is being calculated on the machine end; it's just being read out of a file. This series could produce a half-sine motion, a polynomial motion, trapezoidal, splined, you name it. They would change the step generation method on the "cloud" end and your machine would repeat that motion profile none the wiser. They could do any pattern of motion they want, calculated on a server, then send it to your machine. I think that's likely what they're claiming they're doing.
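
    A quick sketch of how such a delay table could be generated on the server side (the numbers and the square-root ramp shape are purely illustrative):

        # Encode an acceleration ramp as inter-step delays in timer ticks.
        # The ramp shape and figures are invented for illustration.
        def accel_ramp_delays(n_steps, start_ticks=100, cruise_ticks=70):
            delays = []
            denom = max(n_steps - 1, 1)
            for i in range(n_steps):
                # any shape works -- the machine just replays whatever it's handed
                t = start_ticks - (start_ticks - cruise_ticks) * (i / denom) ** 0.5
                delays.append(int(round(t)))
            return delays

        print(accel_ramp_delays(10))   # [100, 90, 86, 83, 80, 78, 76, 74, 72, 70]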

    Again, I don't know any of this at all. I'm just playing devil's advocate to point out how it's possible they're not doing any actual path planning or "motion control" on the machine end. I don't think it's an ideal solution at all; I just like thinking about how things might work.
