Yarn is inherently unpredictable. Some stitches turn out bigger than expected, others smaller. The pattern may cause giant undulations to develop, create holes, or make the fabric bunch up tightly.
Though experienced designers may intuit how patterns are likely to turn out, they often rely on trial and error – an inefficient and time-consuming process, particularly for hand-knitting. A new digital tool developed by a team of researchers at Cornell and Stanford can accurately predict how knitting patterns will look ahead of time – and does so about 100 times faster than existing methods.
“You can make a chart of a pattern that shows a logical grid of stitches, but what you end up with is something completely different from that chart, and these things are all caused by the physics of how yarn interacts with other yarns,” said Steve Marschner, professor of computer science and a senior author of the paper, “Interactive Design of Periodic Yarn-Level Cloth Patterns,” which will be presented at the ACM SIGGRAPH Conference on Computer Graphics and Interactive Techniques in Asia, Dec. 4-7 in Tokyo. “If you can answer these questions with a computer, you can save yourself a lot of trial and error and a lot of prototyping.”
The paper was co-written with lead author Jonathan Leaf and Doug L. James of Stanford, also a senior author, and Cornell computer science doctoral students Rundong Wu and Eston Schweickart.
The yarn pattern simulator grew out of Marschner and James’ ongoing research in computer graphics, which examines the behavior of textiles to better depict them in animation. Computer graphics models often treat cloth as if it were flat, he said, though in fact fabrics are three-dimensional – especially those knitted with yarn.
In the past, computer graphics research drew on textile research to make images more accurate, but in recent years the process has begun moving in the other direction, with discoveries in animation helping to spur advances in manufacturing and design.
“It’s part of a broader trend that’s going on in computer graphics to take techniques that were built for animation and start using them to build real stuff,” Marschner said. Upgrading design models to be faster and more accurate “is nice for animation, but crucial for somebody who wants to design real things. A nice-looking, plausible picture might be helpful for selling your design to a client, but then nobody’s happy if it doesn’t actually come out that way.”
The simulator, which is not yet publicly available, can generate image predictions using information about the kinds of stitches and patterns to be used, and the qualities of the yarn. It’s far faster than existing models for two reasons: the researchers designed the algorithms to run on the computer’s graphics processing unit, which is better suited to this kind of computation than the main processor, and they developed a more efficient method for predicting which strands of yarn are likely to collide with which others.
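To give a flavor of that second idea, here is a minimal, hypothetical sketch – not the paper’s actual algorithm – of a common way simulators narrow down likely collisions: hash points sampled along the yarn into a coarse 3D grid, so that expensive contact checks run only on pairs that land in the same cell. The function name and parameters are illustrative assumptions.

```python
# Illustrative sketch of spatial hashing for collision candidates.
# Not the method from the SIGGRAPH Asia paper; a generic "broad phase"
# that prunes far-apart yarn segments before any detailed contact test.
from collections import defaultdict
from itertools import combinations

def candidate_pairs(points, cell_size):
    """Bucket each sampled yarn point into a 3D grid cell; points that
    share a cell are candidate collision pairs worth checking closely."""
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        grid[cell].append(i)
    pairs = set()
    for members in grid.values():
        pairs.update(combinations(sorted(members), 2))
    return pairs

# Three points on two strands: only the nearby pair shares a cell.
pts = [(0.0, 0.0, 0.0), (0.1, 0.1, 0.0), (5.0, 5.0, 5.0)]
print(candidate_pairs(pts, cell_size=1.0))  # -> {(0, 1)}
```

Because each point touches only one cell, the work grows roughly linearly with the number of points rather than quadratically with all possible pairs – the kind of pruning that, combined with GPU parallelism, makes interactive speeds plausible.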
While previous simulators may take several hours to generate an image prediction, this one could predict a complex pattern in a matter of minutes, Marschner said – both saving energy and making the computer simulation an easier, more efficient and interactive process.
Marschner said he hopes the tool can eventually become part of an interactive design system.
“If you can tweak the design and see what the simulation is doing, then you can have the simulation as your partner in understanding how the design is going to work,” he said.
In part because of the uncertainty of how the finished product will look, designers tend to work by combining known patterns. But with a simulator, knitters or textile manufacturers could be more inventive.
“What we are typically doing now in making clothing is a bit behind what the technology is really capable of. A lot of fabric is being knitted on machines that are capable of knitting absolutely any pattern,” he said. “If you don’t have a feedback loop in your design process other than to go make it and see how it comes out, it’s hard to design something very complicated. So we’re hoping this will open up a lot of possibilities.”
The research was partly supported by the National Science Foundation and Adobe.